The Wolf effect (sometimes Wolf shift) is a frequency shift in the electromagnetic spectrum. [1] It encompasses several closely related phenomena in radiation physics, with analogous effects occurring in the scattering of light. [2] It was first predicted by Emil Wolf in 1987 [3] [4] and confirmed in the laboratory in acoustic sources by Mark F. Bocko, David H. Douglass, and Robert S. Knox, [5] and a year later in optical sources by Dean Faklis and George Morris in 1988. [6] In optics, two non-Lambertian sources that emit beamed energy can interact in a way that causes a shift in the spectral lines. It is analogous to a pair of tuning forks with similar frequencies (pitches) connected mechanically through a sounding board; the strong coupling results in the resonant frequencies getting "dragged down" in pitch. The Wolf effect requires that the waves from the sources be partially coherent, with the wavefronts partially in phase. (Laser light is coherent, while candlelight is incoherent, each photon having a random phase.) The effect can produce either redshifts or blueshifts, depending on the observer's point of view, but it is a redshift when the observer is head-on. [3] For two sources interacting across a vacuum, the Wolf effect cannot produce shifts greater than the linewidth of the source spectral line, since it is a position-dependent change in the distribution of the source spectrum, not a mechanism by which new frequencies are generated. However, when the light interacts with a medium, in combination with effects such as Brillouin scattering, it may produce distorted shifts greater than the linewidth of the source.
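The linewidth bound follows from the observed spectrum being a reweighting of the source line. A toy numerical illustration of just this point (not Wolf's actual correlation formula; the weighting function here is purely hypothetical):

```python
import numpy as np

# Toy illustration only: a position-dependent reweighting of a Gaussian
# spectral line moves its peak without creating any frequency that was not
# already present, so the apparent shift stays within the linewidth.
omega = np.linspace(-5, 5, 2001)        # frequency offset in units of linewidth
line = np.exp(-omega**2 / 2)            # source spectral line (unit width)
weight = 1 - 0.4 * np.tanh(omega)       # hypothetical coherence-induced weight
observed = line * weight                # observed (unnormalized) spectrum

print(omega[np.argmax(line)])           # 0.0: unshifted source peak
print(omega[np.argmax(observed)])       # ~ -0.3: peak pulled redward, < 1 linewidth
```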
https://en.wikipedia.org/wiki/Wolf_effect
The Wolf summation is a method for computing the electrostatic interactions of systems (e.g. crystals). This method is generally more computationally efficient than the Ewald summation. It was proposed by Dieter Wolf. [1]
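A minimal sketch of the idea, assuming the common damped, charge-neutralized, truncated form of Wolf's pair sum (Gaussian units; a production crystal code would add the minimum-image convention and tune the damping parameter alpha and the cutoff):

```python
import numpy as np
from scipy.special import erfc

def wolf_energy(positions, charges, alpha, r_cut):
    """Electrostatic energy from Wolf's damped, shifted, truncated pair sum.

    positions: (N, 3) array; charges: (N,) array. Open boundaries for brevity;
    the short-ranged pair loop over a cutoff is what makes the method cheaper
    than an Ewald summation.
    """
    shift = erfc(alpha * r_cut) / r_cut      # forces each pair term to zero at r_cut
    n = len(charges)
    e_pair = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                e_pair += charges[i] * charges[j] * (erfc(alpha * r) / r - shift)
    # Self term: removes each charge's spurious interaction with its own
    # neutralizing background.
    e_self = -(shift / 2.0 + alpha / np.sqrt(np.pi)) * np.sum(charges**2)
    return e_pair + e_self
```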
https://en.wikipedia.org/wiki/Wolf_summation
The Wolfbox is the name for the original passive DI unit, direct box, or DI as invented in the late 1950s by Dr. Edward Wolfrum, alumnus engineer of Motown, Golden World Records, Terra-Shirma Studios, Metro-Audio Capstan Roller Remote recording, and United Sound Systems in Detroit, Michigan. [1] [2] Used by James Jamerson, Dennis Coffey, Bob Babbit, and other Funk Brothers, the Wolfbox was a key component of the 1960s and 1970s sound of recorded music in the Motown/Detroit scene. According to Wolfrum, the idea for the device originally came to him out of necessity, from "…Recording bands back then [early 1960s] and the fact that I simply couldn't afford microphones." [3] It was at Detroit's WEXL in 1962 that 16-year-old staff engineer Wolfrum deployed his newly created passive direct interface box – later known as the "Wolfbox" – as an interface from the high-impedance output of church PA systems to the microphone input of broadcast audio mixers. In 2013, a limited-edition run of 25 new Wolfboxes was designed, plotted, supervised, and signed by Dr. Wolfrum, [4] in a non-exclusive (unlicensed) agreement with Acme Audio Mfg. Company. Using NOS components and original A-11J and A-12J Triad transformers sourced from vintage gear, these new versions found their way to such places as Nashville's Blackbird Studios and London's Abbey Road, and to Blue Note Records president and bassist Don Was. [5] After the 25-unit production, Dr. Wolfrum ended his Acme collaboration [6] and released his schematic of the Wolfbox in 2014 for free public non-commercial use. [7] [8] Acme Audio Mfg. Co. now produces The Motown DI, which uses a new transformer inspired by the original OEM Triad transformers. The original Wolfboxes relied on vintage A-11J and A-12J Triad transformers (manufactured up to 1974), whose metal composition (notably molybdenum) became regulated, owing to mining and manufacturing toxicity, [9] under OSHA [10] and EPA restrictions. [11] [12] [13]
https://en.wikipedia.org/wiki/Wolfbox
Wolff's law, developed by the German anatomist and surgeon Julius Wolff (1836–1902) in the 19th century, states that bone in a healthy animal will adapt to the loads under which it is placed. [1] If loading on a particular bone increases, the bone will remodel itself over time to become stronger and resist that sort of loading. [2] [3] The internal architecture of the trabeculae undergoes adaptive changes, followed by secondary changes to the external cortical portion of the bone, [4] perhaps becoming thicker as a result. The inverse is true as well: if the loading on a bone decreases, the bone will become less dense and weaker due to the lack of the stimulus required for continued remodeling. [5] This reduction in bone density (osteopenia) is known as stress shielding and can occur as a result of a hip replacement (or other prosthesis), in which the normal stress on the bone is shielded from that bone by being placed on the prosthetic implant. The remodeling of bone in response to loading is achieved via mechanotransduction, a process through which forces or other mechanical signals are converted to biochemical signals in cellular signaling. [6] Mechanotransduction leading to bone remodeling involves the steps of mechanocoupling, biochemical coupling, signal transmission, and cell response. [7] The specific effects on bone structure depend on the duration, magnitude, and rate of loading, and it has been found that only cyclic loading can induce bone formation. [7] When loaded, fluid flows away from areas of high compressive loading in the bone matrix. [8] Osteocytes are the most abundant cells in bone and are also the most sensitive to such fluid flow caused by mechanical loading. [6] Upon sensing a load, osteocytes regulate bone remodeling by signaling to other cells with signaling molecules or by direct contact. [9] Additionally, osteoprogenitor cells, which may differentiate into osteoblasts or osteoclasts, are also mechanosensors and will differentiate one way or the other depending on the loading condition. [9] Computational models suggest that mechanical feedback loops can stably regulate bone remodeling by reorienting trabeculae in the direction of the mechanical loads. [10]
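The stability of such a feedback loop can be seen in a deliberately minimal, hypothetical model (not one of the cited computational models): density rises when strain exceeds a homeostatic set point and resorbs when it falls below it.

```python
def remodel(density0, load, k=0.1, target_strain=1.0, steps=200):
    """Toy negative-feedback sketch of Wolff's law (illustrative only).

    Strain is taken inversely proportional to density for a fixed load;
    density grows when strain exceeds the set point and resorbs otherwise,
    in the spirit of simple mechanostat-style models.
    """
    rho = density0
    for _ in range(steps):
        strain = load / rho                  # stiffer (denser) bone strains less
        rho += k * (strain - target_strain)  # deposit or resorb toward the set point
        rho = max(rho, 1e-6)
    return rho

print(remodel(1.0, load=2.0))  # converges toward rho = 2: higher load, denser bone
print(remodel(1.0, load=0.5))  # converges toward rho = 0.5: unloading, resorption
```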
https://en.wikipedia.org/wiki/Wolff's_law
The Wolff algorithm, [1] named after Ulli Wolff, is an algorithm for Monte Carlo simulation of the Ising model and Potts model in which the unit to be flipped is not a single spin (as in the heat bath or Metropolis algorithms) but a cluster of them. The cluster is defined as a set of connected spins sharing the same spin state, based on the Fortuin–Kasteleyn representation. The Wolff algorithm is similar to the Swendsen–Wang algorithm, but differs in that the former flips only one randomly chosen cluster with probability 1, while the latter flips every cluster independently with probability 1/2. It has been shown numerically that flipping only one cluster decreases the autocorrelation time of the spin statistics. The advantage of the Wolff algorithm over other algorithms for magnetic spin simulations, such as single spin flip, is that it allows non-local moves on the energy. One important consequence of this is that in some situations (e.g. the ferromagnetic Ising model or the fully frustrated Ising model), the scaling of the multicanonical simulation is N^2, better than N^(2+z), where z is the exponent associated with the critical slowing down phenomenon.
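A minimal sketch of one Wolff update for the 2D ferromagnetic Ising model (coupling J, inverse temperature beta, periodic boundaries); the bond probability p = 1 − exp(−2βJ) is the Fortuin–Kasteleyn bond probability mentioned above:

```python
import numpy as np

def wolff_step(spins, beta, J=1.0, rng=None):
    """One Wolff cluster update on an L x L Ising lattice with periodic boundaries."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)       # Fortuin-Kasteleyn bond probability
    seed = (int(rng.integers(L)), int(rng.integers(L)))
    cluster_spin = spins[seed]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[seed] = True
    stack = [seed]
    while stack:                                # grow the cluster depth-first
        i, j = stack.pop()
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if (not in_cluster[ni, nj] and spins[ni, nj] == cluster_spin
                    and rng.random() < p_add):
                in_cluster[ni, nj] = True
                stack.append((ni, nj))
    spins[in_cluster] *= -1                     # flip the whole cluster, probability 1
    return spins

# Usage: equilibrate a 32 x 32 lattice near the critical point (beta_c ~ 0.4407).
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(1000):
    wolff_step(spins, beta=0.44, rng=rng)
print(abs(spins.mean()))                        # |magnetization| per spin
```

Near the critical temperature the grown clusters become large, which is why these non-local flips decorrelate the system much faster than single-spin updates.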
https://en.wikipedia.org/wiki/Wolff_algorithm
The Wolff rearrangement is a reaction in organic chemistry in which an α-diazocarbonyl compound is converted into a ketene by loss of dinitrogen with accompanying 1,2-rearrangement. The Wolff rearrangement yields a ketene as an intermediate product, which can undergo nucleophilic attack by weakly acidic nucleophiles such as water, alcohols, and amines to generate carboxylic acid derivatives, or undergo [2+2] cycloaddition reactions to form four-membered rings. [1] The mechanism of the Wolff rearrangement has been the subject of debate since its first use. No single mechanism sufficiently describes the reaction, and there are often competing concerted and carbene-mediated pathways; for simplicity, only the textbook, concerted mechanism is shown below. [2] The reaction was discovered by Ludwig Wolff in 1902. [3] The Wolff rearrangement has great synthetic utility due to the accessibility of α-diazocarbonyl compounds, the variety of reactions available from the ketene intermediate, and the stereochemical retention of the migrating group. [2] However, the Wolff rearrangement has limitations due to the highly reactive nature of α-diazocarbonyl compounds, which can undergo a variety of competing reactions. [1] The Wolff rearrangement can be induced via thermolysis, [3] photolysis, [4] or transition metal catalysis. [3] In this last case, the reaction is sensitive to the transition metal; silver(I) oxide or other Ag(I) catalysts work well and are generally used. The Wolff rearrangement has been used in many total syntheses; the most common use is trapping the ketene intermediate with nucleophiles to form carboxylic acid derivatives. The Arndt–Eistert homologation is a specific example of this use, wherein a carboxylic acid is elongated by a methylene unit. Another common use is in ring-contraction methods: if the α-diazo ketone is cyclic, the Wolff rearrangement gives a ring-contracted product. The Wolff rearrangement works well for generating ring-strained systems where other reactions may fail.

In 1902, Wolff discovered that treating diazoacetophenone with silver(I) oxide and water resulted in formation of phenylacetic acid; similarly, treatment with silver(I) oxide and ammonia formed phenylacetamide. [3] A few years later, in an independent study, Schröter observed similar results, [5] and the reaction is occasionally called the Wolff–Schröter rearrangement. [2] The Wolff rearrangement was not commonly used until 20 years after its discovery, as facile diazo ketone synthesis was unknown until the 1930s. [2] The reaction has proven useful in synthetic organic chemistry, and many reviews have been published. [1] [2]

The mechanistic pathway of the Wolff rearrangement has been the subject of much debate, as there are often competing concerted and stepwise mechanisms. [2] However, two aspects of the mechanism can be agreed upon. First, α-diazocarbonyl compounds are in an equilibrium of s-cis and s-trans conformers, the distribution of which may influence the mechanism of the reaction. Generally, under photolysis, compounds in the s-cis conformation react in a concerted manner due to the antiperiplanar relationship between the leaving and migrating groups, whereas compounds in the s-trans conformation react stepwise through a carbene intermediate or do not rearrange.
Second, regardless of the reaction mechanism, the rearrangement gives a ketene intermediate, which can be trapped by a weakly acidic nucleophile, such as an alcohol or amine, to give the corresponding ester or amide, or by an olefin, to give a [2+2] cycloaddition adduct. With strong acids the substrate does not rearrange; instead, the α-carbon is protonated, giving SN2 products.

Understanding the stereochemistry of α-diazo ketones is essential to elucidating the mechanism of the Wolff rearrangement. α-Diazocarbonyl compounds are generally locally planar, with large rotational barriers (55–65 kJ/mol) due to C–C olefin character between the carbonyl and the α-carbon, illustrated in the rightmost resonance structure. [6] Such a large barrier slows molecular rotation sufficiently to give an equilibrium between two conformers, s-trans and s-cis. s-cis conformers are electronically favored due to Coulombic attraction between the oxygen, with its partial negative charge, and the cationic nitrogen, as seen in the rightmost resonance structure. [1] If R1 is large and R2 is hydrogen, s-cis is sterically favored. If R1 and R2 are both large, s-trans is sterically favored; if both substituents are sufficiently large, the steric repulsion can outweigh the Coulombic attraction, leading to a preference for s-trans. Small and medium cyclic substrates are constrained to the s-cis conformation.

When the α-diazo ketone is in the s-cis conformation, the leaving group (N2) and the migrating group (R1) are antiperiplanar, which favors a concerted mechanism in which nitrogen extrusion occurs concurrently with the 1,2-alkyl shift. There is evidence that this mechanism operates in both thermolytic and photolytic methods when the s-cis conformer is strongly favored. [7] CIDNP studies show that the photochemical rearrangement of diazoacetone, which largely exists as the s-cis conformer, is concerted. [8] Product ratios from direct and triplet-sensitized photolysis have been used as evidence for proposals that concerted products arise from the s-cis conformer while stepwise products arise through the s-trans conformer. [9]

s-trans α-Diazo ketones do not have an antiperiplanar relationship between the leaving and migrating groups, and thus are thought to generally rearrange stepwise. The stepwise mechanism begins with nitrogen extrusion, forming an α-ketocarbene. The α-ketocarbene can either undergo a 1,2-alkyl shift, to give the ketene product, or undergo a 4π electrocyclic ring closure, to form an antiaromatic oxirene. The oxirene can reopen in two ways, to either of the two possible α-ketocarbenes, each of which can then form a ketene product. There are two primary arguments for stepwise mechanisms. The first is that the rate constants of Wolff rearrangements depend on the stability of the carbene formed, rather than on the migratory aptitude of the migrating group. [10] The most definitive evidence is isotopic scrambling of the ketene, as predicted by an oxirene intermediate, which can only occur on the stepwise path. In the scheme below, the red carbon is 13C-labelled. The symmetric oxirene intermediate can open either way, scrambling the 13C label. If the substituents R1 and R2 are the same, one can quantify the ratio of products stemming from the concerted and stepwise mechanisms; if the substituents are different, the oxirene will have a preference in the direction it opens, and a ratio cannot be quantified, but any scrambling indicates that some of the reactant is going through a stepwise mechanism. [1]
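Because the symmetric oxirene reopens in either direction with equal probability, only half of the oxirene-derived ketene carries a migrated label, so the fraction of product formed via the oxirene is twice the observed scrambling. A one-line check of the figures quoted next:

```python
def oxirene_fraction(scrambled_fraction):
    # The symmetric oxirene reopens either way with equal probability, so only
    # half of the oxirene-derived product shows a migrated (scrambled) label.
    return 2.0 * scrambled_fraction

print(oxirene_fraction(0.08))  # 0.16: diazo acetaldehyde, 8% scrambling -> 16% via oxirene
print(oxirene_fraction(0.25))  # 0.50: mid-range of the 20-30% scrambling for R1 = R2 = phenyl
```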
In photolysis of diazo acetaldehyde, 8% of the label is scrambled, indicating that 16% of the product is formed via the oxirene intermediate. [11] Under photolysis, the diphenyl-substituted (R1 = R2 = phenyl) substrate shows 20–30% label migration, implying that 40–60% of the product goes through the oxirene intermediate. [12] α-Diazocyclohexanone shows no label scrambling under photolytic conditions, as it is entirely s-cis, and thus all of the substrate goes through the concerted mechanism, avoiding the oxirene intermediate. [13] Isotopic labeling studies have been used extensively to measure the ratio of product stemming from the concerted versus the stepwise mechanism. [14] These studies confirm that reactants that prefer s-trans conformations tend to react stepwise. The degree of scrambling is also affected by carbene stability, migratory abilities, and the nucleophilicity of the solvent. The observation that the migratory ability of a substituent is inversely proportional to the amount of carbene formed indicates that, under photolysis, there are competing pathways for many Wolff reactions. [14] The only Wolff rearrangements that show no scrambling are those of s-cis-constrained cyclic α-diazo ketones. [13]

Under both thermolytic and photolytic conditions, there exist competing concerted and stepwise mechanisms. Many mechanistic studies have been carried out, including conformational, sensitization, kinetic, and isotopic scrambling studies. These all point to competing mechanisms, with general trends: α-diazo ketones that exist in the s-cis conformation generally react via the concerted mechanism, whereas those in the s-trans conformation react via the stepwise mechanism, [1] and α-diazo ketones with better migratory groups prefer the concerted mechanism. [1] However, for all substrates except cyclic α-diazo ketones that exist solely in the s-cis conformation, the products come from a combination of both pathways. [1] Transition-metal-mediated reactions are quite varied; however, they generally prefer forming the metal carbene intermediate. [2] The complete mechanism under photolysis can be approximated in the following figure:

The mechanism of the Wolff rearrangement also depends on the aptitude of the migratory group. Migratory abilities have been determined by competition studies. In general, hydrogen migrates the fastest, and alkyl and aryl groups migrate at approximately the same rate, with alkyl migrations favored under photolysis and aryl migrations preferred under thermolysis. [15] Substituent effects on aryl groups are negligible, with the exception of NO2, which is a poor migrator. [15] In competition studies, electron-deficient alkyl, aryl, and carbonyl groups cannot compete with other migrating groups, but are still competent. [16] [17] [18] Heteroatoms are, in general, poor migratory groups, because donation of electron density from their p orbitals into the π* C=O bond decreases their migratory ability. [1] The trends are as follows: [1] for photochemical reactions, H > alkyl ≥ aryl >> SR > OR ≥ NR2; for thermal reactions, H > aryl ≥ alkyl (heteroatoms do not migrate).

While known since 1902, the Wolff rearrangement did not become synthetically useful until the early 1930s, when efficient methods became available to synthesize α-diazocarbonyl compounds. The primary ways to prepare these substrates today are the Arndt–Eistert procedure, the Franzen modification of the Dakin–West reaction, and diazo-transfer methods.
The Arndt–Eistert reaction [19] involves the acylation of diazomethane with an acid chloride to yield a primary α-diazo ketone. The carbon terminus of diazomethane adds to the carbonyl to create a tetrahedral intermediate, which eliminates chloride; the chloride then deprotonates the intermediate to give the α-diazo ketone product. These α-diazo ketones are unstable under acidic conditions, as the α-carbon can be protonated by HCl, followed by SN2 displacement of nitrogen by chloride. The Dakin–West reaction is a reaction of an amino acid with an acid anhydride in the presence of a base to form keto-amides. The Franzen modification [20] of the Dakin–West reaction [21] is a more effective way to make secondary α-diazo ketones: the keto-amide is nitrosated with N2O3 in acetic acid, and the resulting product reacts with methoxide in methanol to give the secondary α-diazo ketone. Diazo-transfer reactions are also commonly used, in which an organic azide, usually tosyl azide, and an activated methylene (i.e. a methylene with two electron-withdrawing groups) react in the presence of a base to give an α-diazo-1,3-diketone. [22] The base deprotonates the methylene, yielding an enolate, which reacts with the tosyl azide and subsequently decomposes in the presence of a weak acid to give the α-diazo-1,3-diketone. The requirement for two electron-withdrawing groups makes this reaction one of limited scope. The scope can be broadened to substrates containing one electron-withdrawing group by formylating the ketone via a Claisen condensation, followed by diazo transfer and deformylative group transfer. [23] One of the greatest advantages of this method is its compatibility with unsaturated ketones. However, to achieve kinetic regioselectivity in enolate formation and greater compatibility with unsaturated carbonyls, one can induce enolate formation with lithium hexamethyldisilazide and subsequently trifluoroacylate rather than formylate. [24]

Wolff rearrangements can be induced under thermolytic, [3] photolytic, [4] and transition-metal-catalyzed conditions. [3] Thermal induction requires heating to relatively high temperatures, around 180 °C, and thus has limited use. [3] Many Wolff rearrangement products are ring-strained and are susceptible to ring-opening at such high temperatures. In addition, SN2 substitution of the diazo group at the α-carbon can take place at lower temperatures than rearrangement, producing byproducts. The greatest use of thermal Wolff rearrangements is the formation of carboxylic acid analogs by interception of the ketene with high-boiling solvents, such as aniline and phenol. [3] Transition metals greatly lower the temperature of Wolff rearrangements by stabilizing a metal-carbene intermediate. However, these carbenes can be so stable that they do not undergo rearrangement: carbenes of rhodium, copper, and palladium are too stable and give non-Wolff products (primarily carbene insertion products). [2] The most commonly used metal catalyst is silver(I) oxide, although silver benzoate is also common. These reactions are generally run in the presence of a weak base, such as sodium carbonate or tertiary amines. [2] Whereas thermal and metal-mediated Wolff rearrangements date back to 1902, [3] photolytic methods are somewhat newer, with the first example of a photolytic Wolff rearrangement reported in 1951. [4]
α-Diazo ketones have two absorption bands: an allowed π→π* transition at 240–270 nm and a formally forbidden π→σ* transition at 270–310 nm. [4] Medium- or low-pressure mercury arc lamps can excite these respective transitions. Triplet sensitizers give non-Wolff carbene byproducts and thus are not useful in synthetic applications of the Wolff rearrangement, [2] although they have been used to probe its mechanism.

The Wolff rearrangement has a few retrons, depending on the reaction of the ketene intermediate. A carboxylic acid derivative with an α-methylene group is a retron for an Arndt–Eistert-type homologation; an acid in which the α-carbon belongs to a ring is a retron for a Wolff rearrangement ring contraction. In the Arndt–Eistert homologation, a carboxylic acid is treated with thionyl chloride to generate the acid chloride, which then reacts with diazomethane (R2 = H), or occasionally a diazoalkane, via the Arndt–Eistert procedure to generate an α-diazo ketone. This undergoes a metal-catalyzed or photolytic Wolff rearrangement to give a ketene, which can be trapped with any weak acid, such as an alcohol or amine, to form the ester or amide; trapping with water, to form the acid, is the most common variant. In the most basic form, where R2 = H and RXH = H2O, the reaction lengthens the alkyl chain of a carboxylic acid by a methylene unit. However, there is great synthetic utility in the variety of reactions one can carry out by varying the diazoalkane and the weak acid. The migrating group R1 migrates with complete retention. [2] A very useful application of the Arndt–Eistert homologation forms the homologated aldehyde, by either trapping the ketene with N-methylaniline and reducing with lithium aluminium hydride, or trapping the ketene with ethanethiol and reducing with Raney nickel. [25] [26] There exist many hundreds of examples of the Arndt–Eistert homologation in the literature; [27] prominent examples in natural product total synthesis include the syntheses of (−)-indolizidine and (+)-macbecin. [28] [29] A recent example of the Arndt–Eistert homologation is a step in the middle stage of Sarah Reisman's synthesis of (+)-salvileucalin B. [30]

If the reactant is a cyclic α-diazo ketone, then the Wolff rearrangement product is the one-carbon ring-contracted product. These reactions are generally concerted, due to the s-cis conformation, and are photocatalyzed. The reaction below shows the concerted mechanism for the ring contraction of α-diazocyclohexanone, followed by trapping of the ketene with a weakly acidic nucleophile. The first known example is the ring-contracted Wolff rearrangement product of α-diazocamphor, with subsequent kinetic hydration of the ketene from the more sterically accessible "endo" face to give exo-1,5,5-trimethylbicyclo[2.1.1]hexane-6-carboxylic acid. [31] Ring contractions have been used extensively to build strained ring systems, as ring size does not impede the Wolff rearrangement but often impedes other reactions. There are many examples where the Wolff rearrangement is used to contract cyclopentanones to cyclobutanes, [32] and the rearrangement is commonly used to form strained bicyclic and ring-fused systems. There exist a handful of examples of ring contractions from cyclobutanones to cyclopropanes. [33]
The Wolff rearrangement is capable of contracting cyclohexanones to cyclopentanes, but it is infrequently used to do so, because the Favorskii rearrangement accomplishes this transformation and the Wolff precursor is often more challenging to synthesize. [2] However, an example of a cyclohexanone ring contraction using deformylative diazo transfer followed by a Wolff rearrangement is Keiichiro Fukumoto's synthesis of (±)-∆9(12)-capnellene. [34]

Ketene intermediates produced via the Wolff rearrangement are well known to undergo [2+2] thermal cycloadditions with olefins to form four-membered rings, in both intermolecular and intramolecular reactions; examples of both are shown below. [35] [36] [37] Ketenes are able to undergo what is normally considered a forbidden [2+2] cycloaddition because the ketene reacts in an antarafacial manner, leading to the Woodward–Hoffmann-allowed [π2s + π2a] cycloaddition. [36] Ketene [2+2] cycloadditions can be difficult reactions and give poor yields due to competing processes: the high-energy aldoketene is very reactive and will cyclize with the diazo ketone starting material to produce butenolides and pyrazoles. [2] Ketene [2+2] cycloaddition reactions have been used in many total syntheses since Corey's use of the [2+2] cyclization in synthesizing the prostaglandins. [35] Robert Ireland's synthesis of (±)-aphidicolin uses the Wolff rearrangement to perform a tandem ring contraction and [2+2] cycloaddition. [38]

The Danheiser benzannulation photolyzes α-diazo ketones and traps the intermediates with an alkyne, triggering a pericyclic cascade that ultimately forms versatilely substituted phenols. [39] The first step in the benzannulation is the photolysis of an α-diazo ketone to form a vinylketene. The vinylketene then undergoes a [2+2] cycloaddition with an alkyne to form a 2-vinylcyclobutenone, which undergoes a 4π electrocyclic ring-opening to generate a dienylketene. The dienylketene subsequently undergoes a 6π electrocyclic ring closure followed by tautomerization to form the phenolic benzannulated product.

The vinylogous Wolff rearrangement consists of a β,γ-unsaturated diazo ketone undergoing a Wolff rearrangement with a formal 1,3-shift of the CH2CO2R group. It yields a γ,δ-unsaturated carboxylic acid derivative, which has the same retron as the Claisen rearrangement. The variant was discovered when it was noticed that thermolysis of 1-diazo-3,3,3-triarylpropan-2-ones gave unexpected isomeric products. [40] Copper(II) and rhodium(II) salts tend to give vinylogous Wolff-rearranged products, and CuSO4 and Rh2(OAc)4 are the most commonly used catalysts. [41] This is because they promote metal carbene formation; the carbene can add to the olefin to form a cyclopropane, which can reopen via a retro-[2+2] to form a formally 1,3-shifted ketene (vis-à-vis the normal Wolff-rearranged ketene), which can be trapped by a nucleophile to give the vinylogous Wolff product. [42]
https://en.wikipedia.org/wiki/Wolff_rearrangement
The Wolffenstein–Böters reaction is an organic reaction converting benzene to picric acid by a mixture of aqueous nitric acid and mercury(II) nitrate. [1] [2] [3] The reaction, which involves simultaneous nitration and oxidation, was first reported by the German chemists Richard Wolffenstein and Oskar Böters in 1906. [4] According to one series of studies, the mercury nitrate first takes benzene to the corresponding nitroso compound and, through the diazonium salt, to the phenol. The presence of nitrite is essential for the reaction; picric acid formation is prevented when urea, a trap for nitrous acid, is added to the mixture. From then on the reaction proceeds as a regular aromatic nitration. [5] [6] A conceptually related reaction is the Bohn–Schmidt reaction, dating to 1889, which involves the hydroxylation of a hydroxyanthraquinone with sulfuric acid and lead or selenium to give a polyhydroxylated anthraquinone.
https://en.wikipedia.org/wiki/Wolffenstein–Böters_reaction
Wolffram's Red Salt is an inorganic compound with the double salt formula [Pt(C2H5NH2)4Cl2][Pt(C2H5NH2)4]Cl4·4H2O. The compound is an early example of a one-dimensional coordination polymer, serving as a representative structure for studies in solid-state physics. It has been of interest due to its unusual mixed-valence system of Pt(II) and Pt(IV) bridged by a chlorine atom. The deep red color of the double salt, whose component salts are colorless or pale yellow, piqued the interest of early inorganic chemists and ultimately inspired studies of the physical properties of the compound in search of potential applications. In 1850, Charles-Adolphe Wurtz described a colorless platinum tetrammine with the formula [Pt(etn)4]Cl2·2H2O; Wolffram (H. Wolffram, Dissertation, Königsberg, 1900), after whom the compound is named, obtained a red salt from this by the action of hydrogen peroxide in hydrochloric acid, and initially considered it to be isomeric with Wurtz's salt. With no known case of plato-tetrammine isomerism at the time, this prompted extensive discussion in the literature of the true nature and properties of Wolffram's Red Salt. Reihlen and Flohr [1] demonstrated that Wolffram's salt could be prepared directly by mixing aqueous solutions of the colorless [Pt(etn)4]Cl2 and its yellow analogue, [Pt(etn)4Cl2]Cl2, where etn = NH2CH2CH3, leading to the most probable conclusion of the double salt formula [Pt(C2H5NH2)4Cl2][Pt(C2H5NH2)4]Cl4·4H2O, as against concurrently postulated explanations invoking tervalent platinum. [2] Early explanations attributed the deep red color of the salt to the special structure of the crystal lattice, [1] albeit with little justification. While Drew and Tess [2] attempted to explain the deep color of the compound on the assumption of a Pt(III) species, [1] Jensen established the diamagnetism of the compound and proved that it did not involve Pt(III). [3] Spectrochemical studies on the crystals concluded that the deep color of Wolffram's salt is due to the stacking of "infinite chains": linear Pt(II)/Pt(IV) units stacked on top of each other. [4] In 1960, the crystal structure was shown to be consistent with the formulated double salt, [5] inspiring examinations of other analogues to compare and better understand this unique coordination pattern. [6] [7] [8] [9] [10] Solid-state physical examinations were conducted to further elucidate the charge transfer across the mixed-valence chain and to explore potential use as a semiconductor. X-ray scattering studies were performed, [11] [12] explicitly showing the mixed-valence chain structure. Optical properties were probed, [13] as was potential use as a photocatalyst, [14] albeit with disappointing results.
https://en.wikipedia.org/wiki/Wolffram's_red_salt
The Wolff–Kishner reduction is a reaction used in organic chemistry to convert carbonyl functionalities into methylene groups. [1] [2] In the context of complex molecule synthesis, it is most frequently employed to remove a carbonyl group after it has served its synthetic purpose of activating an intermediate in a preceding step; as such, there is no obvious retron for this reaction. The reaction was reported by Nikolai Kishner in 1911 [3] and Ludwig Wolff in 1912. [4] In general, the reaction mechanism first involves the in situ generation of a hydrazone by condensation of hydrazine with the ketone or aldehyde substrate; sometimes, however, it is advantageous to use a pre-formed hydrazone as substrate (see modifications). The rate-determining step of the reaction is deprotonation of the hydrazone by an alkoxide base to form a diimide anion, via a concerted, solvent-mediated protonation/deprotonation step. Collapse of this alkyl diimide with loss of N2 [2] leads to formation of an alkyl anion, which is protonated by solvent to give the desired product. Because the Wolff–Kishner reduction requires highly basic conditions, it is unsuitable for base-sensitive substrates. In some cases, formation of the required hydrazone will not occur at sterically hindered carbonyl groups, preventing the reaction. However, this method can be superior to the related Clemmensen reduction for compounds containing acid-sensitive functional groups, such as pyrroles, and for high-molecular-weight compounds.

The Wolff–Kishner reduction was discovered independently by N. Kishner [3] in 1911 and Ludwig Wolff in 1912. [4] Kishner found that addition of a pre-formed hydrazone to hot potassium hydroxide containing crushed platinized porous plate led to formation of the corresponding hydrocarbon. (A review describing the life and work of Kishner, "Disability, Despotism, Deoxygenation—From Exile to Academy Member: Nikolai Matveevich Kizhner", was published in 2013. [5] ) Wolff later accomplished the same result by heating an ethanol solution of semicarbazones or hydrazones in a sealed tube to 180 °C in the presence of sodium ethoxide. The method developed by Kishner has the advantage of avoiding the sealed tube, but both methodologies suffered from unreliability when applied to hindered substrates. These disadvantages promoted the development of Wolff's procedure, wherein high-boiling solvents such as ethylene glycol and triethylene glycol are used to allow the high temperatures required for the reaction while avoiding the need for a sealed tube. [6] [7] These initial modifications were followed by many other improvements, as described below.

The mechanism of the Wolff–Kishner reduction has been studied by Szmant and coworkers. [8] [9] [10] [11] According to Szmant's research, the first step in this reaction is the formation of a hydrazone anion 1 by deprotonation of the terminal nitrogen by MOH. If semicarbazones are used as substrates, initial conversion into the corresponding hydrazone is followed by deprotonation. [4] A range of mechanistic data suggests that the rate-determining step involves formation of a new carbon–hydrogen bond at the carbon terminus of the delocalized hydrazone anion. This proton capture takes place in concert with a solvent-induced abstraction of the second proton at the nitrogen terminus.
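In rate-law form, this bimolecular rate-determining step corresponds to the second-order expression below (a restatement of the kinetics described next, not a new result):

$$\text{rate} = k\,[\text{hydrazone}]\,[\mathrm{OH}^{-}]$$

with one molecule of hydrazone and one hydroxide (alkoxide) involved at or before the rate-determining step.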
Szmant's finding that this reaction is first order in both hydroxide ion and ketone hydrazone supports this mechanistic proposal. [12] Several molecules of solvent have to be involved in this process in order to allow a concerted pathway. A detailed Hammett analysis [8] of aryl aldehydes, methyl aryl ketones, and diaryl ketones showed a non-linear relationship, which the authors attribute to the complexity of the rate-determining step. Mildly electron-withdrawing substituents favor carbon–hydrogen bond formation, but strongly electron-withdrawing substituents decrease the negative charge at the terminal nitrogen and in turn favor a bigger and harder solvation shell that makes breaking of the N–H bond more difficult. The exceptionally large negative entropies of activation observed can be explained by the high degree of organization in the proposed transition state. It was furthermore found that the rate of the reaction depends on the concentration of the hydroxylic solvent and on the cation in the alkoxide catalyst. The presence of crown ether in the reaction medium can increase the reactivity of the hydrazone anion 1 by dissociating the ion pair, and therefore enhance the reaction rate. [11] The final step of the Wolff–Kishner reduction is the collapse of the diimide anion 2 in the presence of a proton source: loss of dinitrogen affords an alkyl anion 3, which undergoes a rapid and irreversible acid–base reaction with solvent to give the alkane. Evidence for this high-energy intermediate was obtained by Taber via intramolecular trapping; the stereochemical outcome of this experiment was more consistent with an alkyl anion intermediate than with the alternative possibility of an alkyl radical. [13] The overall driving force of the reaction is the evolution of nitrogen gas from the reaction mixture.

Many of the efforts devoted to improving the Wolff–Kishner reduction have focused on more efficient formation of the hydrazone intermediate, by removal of water, and on a faster rate of hydrazone decomposition, by increasing the reaction temperature. [6] [7] Some of the newer modifications provide more significant advances and allow reactions under considerably milder conditions. The table shows a summary of some of the modifications that have been developed since the initial discovery. In 1946, Huang Minlon reported a modified procedure for the Wolff–Kishner reduction of ketones in which excess hydrazine and water are removed by distillation after hydrazone formation. [14] [20] The temperature-lowering effect of the water produced in hydrazone formation usually resulted in long reaction times and harsh reaction conditions, even if anhydrous hydrazine was used to form the hydrazone. The modified procedure consists of refluxing the carbonyl compound in 85% hydrazine hydrate with three equivalents of sodium hydroxide, followed by distillation of water and excess hydrazine and elevation of the temperature to 200 °C. Significantly reduced reaction times and improved yields can be obtained with this modification. Huang Minlon's original report described the reduction of β-(p-phenoxybenzoyl)propionic acid to γ-(p-phenoxyphenyl)butyric acid in 95% yield, compared to the 48% yield obtained by the traditional procedure. Nine years after Huang Minlon's first modification, Barton developed a method for the reduction of sterically hindered carbonyl groups. [15]
This method features rigorous exclusion of water, higher temperatures, and longer reaction times, as well as sodium in diethylene glycol instead of an alkoxide base. Under these conditions, some of the problems that normally arise with hindered ketones can be alleviated; for example, the C11 carbonyl group in the steroidal compound shown below was successfully reduced under Barton's conditions, whereas Huang–Minlon conditions failed to effect this transformation. Slow addition of preformed hydrazones to potassium tert-butoxide in DMSO as the reaction medium, instead of glycols, allows hydrocarbon formation to be conducted successfully at temperatures as low as 23 °C. [16] Cram attributed the higher reactivity in DMSO to the higher base strength of potassium tert-butoxide in this medium. This modification has not been exploited to a great extent in organic synthesis, owing to the need to isolate preformed hydrazone substrates and to add the hydrazone to the reaction mixture over several hours. Henbest extended Cram's procedure by refluxing carbonyl hydrazones and potassium tert-butoxide in dry toluene. [17] Slow addition of the hydrazone is not necessary, and this procedure has been found better suited than Cram's modification to carbonyl compounds prone to base-induced side reactions. For example, double-bond migration in α,β-unsaturated enones and functional group elimination in certain α-substituted ketones are less likely to occur under Henbest's conditions. [21]

Treatment of tosylhydrazones with hydride-donor reagents to obtain the corresponding alkanes is known as the Caglioti reaction. [18] [22] The initially reported reaction conditions have been modified, and hydride donors such as sodium cyanoborohydride, sodium triacetoxyborohydride, or catecholborane can reduce tosylhydrazones to hydrocarbons. [23] The reaction proceeds under relatively mild conditions and can therefore tolerate a wider array of functional groups than the original procedure: reductions with sodium cyanoborohydride can be conducted in the presence of ester, amide, cyano, nitro, and chloro substituents, although primary bromo and iodo substituents are displaced by the nucleophilic hydride under these conditions. The reduction pathway is sensitive to the pH, the reducing agent, and the substrate. [24] [25] One possibility, occurring under acidic conditions, involves direct hydride attack on the iminium ion 1 following prior protonation of the tosylhydrazone. The resulting tosylhydrazine derivative 2 subsequently undergoes elimination of p-toluenesulfinic acid and decomposes, via a diimide intermediate 3, to the corresponding hydrocarbon. A slight variation of this mechanism occurs when tautomerization to the azohydrazine is facilitated by inductive effects: the transient azohydrazine 4 can be reduced to the tosylhydrazine derivative 2 and furnish the deoxygenated product analogously to the first possibility. This mechanism operates when relatively weak hydride donors, such as sodium cyanoborohydride, are used; such donors are not strong enough to reduce imines, but can reduce iminium ions. When stronger hydride donors are used, a different mechanism operates, which avoids the need for acidic conditions: hydride delivery occurs to give intermediate 5, followed by elimination of the metal sulfinate to give azo intermediate 6.
This intermediate then decomposes, with loss of nitrogen gas, to give the reduced compound. When strongly basic hydride donors such as lithium aluminium hydride are used, deprotonation of the tosylhydrazone can occur before hydride delivery. The intermediate anion 7 can undergo hydride attack, eliminating a metal sulfinate to give azo anion 8, which readily decomposes to carbanion 9; protonation then gives the reduced product. As with the parent Wolff–Kishner reduction, the deoxygenation can often fail owing to unsuccessful formation of the corresponding tosylhydrazone. This is common for sterically hindered ketones, as was the case for the cyclic amino ketone shown below. [26] Alternative methods of reduction can be employed when formation of the hydrazone fails, including thioketal reduction with Raney nickel or reaction with sodium triethylborohydride.

α,β-Unsaturated carbonyl tosylhydrazones can be converted into the corresponding alkenes with migration of the double bond. The reduction proceeds stereoselectively to furnish the E geometric isomer. [27] A very mild method uses one equivalent of catecholborane to reduce α,β-unsaturated tosylhydrazones. [28] The mechanism of the NaBH3CN reduction of α,β-unsaturated tosylhydrazones has been examined by deuterium labeling: alkene formation is initiated by hydride reduction of the iminium ion, followed by double-bond migration and nitrogen extrusion, which occur in a concerted manner. [29] The allylic diazene rearrangement, as the final step in the reductive 1,3-transposition of α,β-unsaturated tosylhydrazones to the reduced alkenes, can also be used to establish sp3 stereocenters from allylic diazenes containing prochiral stereocenters. The influence of the alkoxy stereocenter results in diastereoselective reduction of the α,β-unsaturated tosylhydrazone. [30] The authors predicted that diastereoselective transfer of the diazene hydrogen to one face of the prochiral alkene could be enforced during the suprafacial rearrangement.

In 2004, Myers and coworkers developed a method for the preparation of N-tert-butyldimethylsilylhydrazones from carbonyl-containing compounds. [19] These products can be used as a superior alternative to hydrazones in the transformation of ketones into alkanes. The advantages of this procedure are considerably milder reaction conditions, higher efficiency, and operational convenience: the condensation of 1,2-bis(tert-butyldimethylsilyl)hydrazine with aldehydes and ketones with Sc(OTf)3 as catalyst is rapid and efficient at ambient temperature, and the formation and reduction of N-tert-butyldimethylsilylhydrazones can be conducted as a one-pot procedure in high yield. The new method was compared directly with the standard Huang–Minlon Wolff–Kishner conditions (hydrazine hydrate, potassium hydroxide, diethylene glycol, 195 °C) for the steroidal ketone shown above: the product was obtained in 79% yield under the classical conditions, compared to 91% from the reduction via the intermediate N-tert-butyldimethylsilylhydrazone.

The Wolff–Kishner reduction is not suitable for base-sensitive substrates and can, under certain conditions, be hampered by steric hindrance surrounding the carbonyl group. Some of the more common side reactions are listed below. A commonly encountered side reaction in Wolff–Kishner reductions involves azine formation by reaction of the hydrazone with the carbonyl compound.
Formation of the ketone (and hence of the azine) can be suppressed by vigorous exclusion of water during the reaction. Several of the procedures presented require isolation of the hydrazone compound prior to reduction; this can be complicated by further transformation of the product hydrazone to the corresponding azine during purification. Cram found that azine formation is favored by rapid addition of preformed hydrazones to potassium tert-butoxide in anhydrous dimethyl sulfoxide. [16] The second principal side reaction is reduction of the ketone or aldehyde to the corresponding alcohol: after initial hydrolysis of the hydrazone, the free carbonyl derivative is reduced by alkoxide to the carbinol. In 1924, Eisenlohr reported that substantial amounts of hydroxydecalin were observed during the attempted Wolff–Kishner reduction of trans-β-decalone. [31] In general, alcohol formation may be repressed by exclusion of water or by addition of excess hydrazine. Kishner noted during his initial investigations that, in some instances, α-substitution of a carbonyl group can lead to elimination, affording unsaturated hydrocarbons under typical reaction conditions. Leonard later developed this reaction further and investigated the influence of different α-substituents on the reaction outcome. [21] [32] He found that the amount of elimination increases with increasing steric bulk of the leaving group; furthermore, α-dialkylamino-substituted ketones generally gave a mixture of reduction and elimination products, whereas less basic leaving groups resulted in exclusive formation of the alkene product. The fragmentation of α,β-epoxy ketones to allylic alcohols has been extended to a synthetically useful process and is known as the Wharton reaction. [33] Grob rearrangement of strained rings adjacent to the carbonyl group has been observed by Erman and coworkers: during an attempted Wolff–Kishner reduction of trans-π-bromocamphor under Cram's conditions, limonene was isolated as the only product. [34] Similarly, cleavage of strained rings adjacent to the carbonyl group can occur; when 9β,19-cyclo-5α-pregnane-3,11,20-trione 3,20-diethylene ketal was subjected to Huang–Minlon conditions, ring enlargement was observed instead of formation of the 11-deoxo compound. [35]

The Wolff–Kishner reduction has been applied to the total synthesis of scopadulcic acid B, [36] aspidospermidine, [37] [38] and dysidiolide. [39] The Huang Minlon modification of the Wolff–Kishner reduction is one of the final steps in the synthesis of (±)-aspidospermidine; the carbonyl group that was reduced had been essential for preceding steps in the synthesis. The tertiary amide was stable to the reaction conditions and was subsequently reduced by lithium aluminium hydride. [38] Amides are usually not suitable substrates for the Wolff–Kishner reduction, as that example demonstrates. However, Coe and coworkers found that a twisted amide can be efficiently reduced under Wolff–Kishner conditions. [40] The authors explain this observation by the stereoelectronic bias of the substrate, which prevents "anti-Bredt" iminium ion formation and therefore favors ejection of alcohol and hydrazone formation; the amide functionality in this strained substrate can be considered as isolated amine and ketone functionalities, as resonance stabilization is prevented by torsional restrictions. The product was obtained in 68% overall yield in a two-step procedure.
A tricyclic carbonyl compound was reduced using the Huang Minlon modification of the Wolff–Kishner reduction. [41] Several attempts at decarbonylation of the allylic acetate-containing tricyclic ketone failed, and the acetate functionality had to be removed to allow the Wolff–Kishner reduction; the allylic alcohol was then installed via oxyplumbation. The Wolff–Kishner reduction has also been used on kilogram scale for the synthesis of a functionalized imidazole substrate. Several alternative reduction methods were investigated, but all of the tested conditions were unsuccessful. Safety concerns for a large-scale Wolff–Kishner reduction were addressed, and a highly optimized procedure afforded the product in good yield. [42] An allylic diazene rearrangement was used in the synthesis of the C21–C34 fragment of antascomicin B. [43] The hydrazone was reduced selectively with catecholborane, the excess reducing agent was decomposed with sodium thiosulfate, and the crude reaction product was then treated with sodium acetate to give the 1,4-syn isomer.
https://en.wikipedia.org/wiki/Wolff–Kishner_reduction
Wolfgang Kautek is an Austrian physical chemist and the head of the Department of Physical Chemistry at the University of Vienna. [1] He is the President of the Erwin Schrödinger Society for Nanosciences (ESG) [2] and the Chairman of the Research Group "Physical Chemistry" at the Austrian Chemical Society (GÖCh). [3]
https://en.wikipedia.org/wiki/Wolfgang_Kautek
Wolfgang Krull (26 August 1899 – 12 April 1971) was a German mathematician who made fundamental contributions to commutative algebra, introducing concepts that are now central to the subject. Krull was born and went to school in Baden-Baden. He attended the Universities of Freiburg, Rostock, and finally Göttingen from 1919 to 1921, [1] where he earned his doctorate under Alfred Loewy. He worked as an instructor and professor at Freiburg, then spent a decade at the University of Erlangen. In 1939, Krull moved to the University of Bonn as chair, where he remained for the rest of his life. Wolfgang Krull was a member of the Nazi Party. [2] His 35 doctoral students include Wilfried Brauer, Karl-Otto Stöhr, and Jürgen Neukirch.
https://en.wikipedia.org/wiki/Wolfgang_Krull
Wolfgang Konrad Spohn (born 20 March 1950, in Tübingen) is a German philosopher. He is professor of philosophy and philosophy of science at the University of Konstanz. Spohn studied philosophy, logic and philosophy of science, and mathematics at the University of Munich, where he acquired his MA (1973) and his PhD (1976) with a thesis on the Grundlagen der Entscheidungstheorie (Foundations of Decision Theory). During his time as an assistant professor he earned the habilitation (1984) with a thesis on Eine Theorie der Kausalität (A Theory of Causality). He held professorships at the University of Regensburg (1986–91), the University of Bielefeld (1991–96), and the University of Konstanz (1996–2018). Since 2019 he has been senior professor at the University of Tübingen. Spohn is an editor of the philosophical journal Erkenntnis and was its editor-in-chief from 1988 to 2001. He is a founding member of the Gesellschaft für Analytische Philosophie and was its vice-president from 2006 to 2012. He was a Fellow at the Wissenschaftskolleg zu Berlin (1985/86) and a Distinguished Visiting Professor at the University of California, Irvine (1988). Since 2002 he has been a member of the Deutsche Akademie der Naturforscher Leopoldina [1] and since 2015 a member of the Academia Europaea. In 2012 he became the first person outside Anglo-Saxon academia to win the Lakatos Award of the London School of Economics, for his book The Laws of Belief: Ranking Theory and its Philosophical Applications. In 2015 he received the Frege Prize of the Gesellschaft für Analytische Philosophie for outstanding achievements by a German-speaking philosopher in the field of analytic philosophy. In 2023 he received an honorary doctorate from the University of Munich. [2] Spohn was speaker of two DFG Research Units, Logic in Philosophy (1997–2003) and What if? (2012–2018), PI of the Collaborative Research Center 471 Variation in the Lexicon (2000–2008), and co-initiator of the DFG Priority Program SPP 1516 New Frameworks of Rationality (2011–2018), from which the interdisciplinary, philosophical-psychological Handbook of Rationality (MIT Press, 2021), co-edited with Markus Knauff, emerged. Since 2019 he has been PI of the Excellence Cluster EXC 2064 Machine Learning: New Perspectives for Science, and since 2020 he has directed his Reinhart Koselleck Project Reflexive Decision and Game Theory. Spohn is the youngest brother of the historical sociologist Willfried Spohn [de] and of the mathematical physicist Herbert Spohn. Spohn is best known for his contributions to formal epistemology, in particular for comprehensively developing ranking theory [3] since 1982, his theory of the dynamics of belief. It is an alternative to probability theory and has similarly great philosophical significance for many related epistemological topics (such as the problem of induction and the theory of causation). Spohn's research extends to philosophy of science, metaphysics and ontology, philosophy of language and mind, two-dimensional semantics, philosophical logic, and decision and game theory (see the collection of papers [4]). His dissertation [5] and his paper "Stochastic Independence, Causal Independence, and Shieldability" [6] are precursors of the theory of Bayesian networks and their causal interpretation, which became the dominant statistical theory of causality after 1990. His paper "How to Make Sense of Game Theory" [7] is a forerunner of epistemic game theory, which has developed into an important branch of game theory.
The novel game-theoretic concept of a dependency equilibrium, which generalizes the basic notion of a Nash equilibrium, was first introduced in his paper "Dependency Equilibria and the Causal Structure of Decision and Game Situations".
https://en.wikipedia.org/wiki/Wolfgang_Spohn
Wolfram Mathematica is a software system with built-in libraries for several areas of technical computing that allows machine learning, statistics, symbolic computation, data manipulation, network analysis, time series analysis, NLP, optimization, the plotting of functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois. [8] [9] The Wolfram Language is the programming language used in Mathematica. [10] Mathematica 1.0 was released on June 23, 1988, in Champaign, Illinois, and Santa Clara, California. [11] [12] [13] The Wolfram Language shows the influence of Lisp; for example, the Mathematica command Most behaves like the Lisp command butlast. There is a substantial literature on the development of computer algebra systems (CAS) such as Mathematica.

Mathematica is split into two parts: the kernel and the front end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end. The original front end, designed by Theodore Gray [14] in 1988, consists of a notebook interface and allows the creation and editing of notebook documents that can contain code, plaintext, images, and graphics. [15] Code development is also supported in a range of standard integrated development environments (IDEs), including Eclipse, [16] IntelliJ IDEA, [17] Atom, Vim, and Visual Studio Code, and through Git integration. The Mathematica kernel also includes a command-line front end. [18] Other interfaces include JMath, [19] based on GNU Readline, and WolframScript, [20] which runs self-contained Mathematica programs (with arguments) from the UNIX command line.

Capabilities for high-performance computing were extended with the introduction of packed arrays in version 4 (1999) [21] and sparse matrices in version 5 (2003), [22] and by adopting the GNU Multiple Precision Arithmetic Library for high-precision arithmetic. Version 5.2 (2005) added automatic multi-threading when computations are performed on multi-core computers. [23] This release included CPU-specific optimized libraries. [24] In addition, Mathematica is supported by third-party specialist acceleration hardware such as ClearSpeed. [25] In 2002, gridMathematica was introduced to allow user-level parallel programming on heterogeneous clusters and multiprocessor systems, [26] and in 2008 parallel computing technology was included in all Mathematica licenses, including support for grid technology such as Windows HPC Server 2008, Microsoft Compute Cluster Server, and Sun Grid. Support for CUDA and OpenCL GPU hardware was added in 2010. [27]

As of version 14, there are 6,602 built-in functions and symbols in the Wolfram Language. [28] Stephen Wolfram announced the launch of the Wolfram Function Repository in June 2019 as a way for the public Wolfram community to contribute functionality to the Wolfram Language. [29] At the time of Stephen Wolfram's release announcement for Mathematica 13, there were 2,259 functions contributed as Resource Functions. [30] In addition to the Wolfram Function Repository, there is a Wolfram Data Repository with computable data and a Wolfram Neural Net Repository for machine learning. [31]
[ 31 ] Wolfram Mathematica is the basis of the Combinatorica package, which adds discrete mathematics functionality in combinatorics and graph theory to the program. [ 32 ] Communication with other applications can be done using a protocol called Wolfram Symbolic Transfer Protocol (WSTP). It allows communication between the Wolfram Mathematica kernel and the front end and provides a general interface between the kernel and other applications. [ 33 ] Wolfram Research freely distributes a developer kit for linking applications written in the programming language C to the Mathematica kernel through WSTP. Using J/Link, [ 34 ] a Java program can ask Mathematica to perform computations; similar functionality is achieved with .NET/Link, [ 35 ] but with .NET programs instead of Java programs. Other languages that connect to Mathematica include Haskell , [ 36 ] AppleScript , [ 37 ] Racket , [ 38 ] Visual Basic , [ 39 ] Python , [ 40 ] [ 41 ] and Clojure . [ 42 ] Mathematica supports the generation and execution of Modelica models for systems modeling and connects with Wolfram System Modeler . Links are also available to many third-party software packages and APIs. [ 43 ] Mathematica can also capture real-time data from a variety of sources [ 44 ] and can read and write to public blockchains ( Bitcoin , Ethereum , and ARK). [ 45 ] It supports import and export of over 220 data, image, video, sound, computer-aided design (CAD), geographic information system (GIS), [ 46 ] document, and biomedical formats. In 2019, support was added for compiling Wolfram Language code to LLVM . [ 47 ] Version 12.3 of the Wolfram Language added support for Arduino . [ 48 ] Mathematica is also integrated with Wolfram Alpha , an online answer engine that provides additional data, some of which is kept updated in real time, for users who use Mathematica with an internet connection. Some of the data sets include astronomical, chemical, geopolitical, language, biomedical, airplane, and weather data, in addition to mathematical data (such as knots and polyhedra). [ 49 ] BYTE in 1989 listed Mathematica as among the "Distinction" winners of the BYTE Awards, stating that it "is another breakthrough Macintosh application ... it could enable you to absorb the algebra and calculus that seemed impossible to comprehend from a textbook". [ 50 ] Mathematica has been criticized for being closed source. [ 51 ] Wolfram Research claims that keeping Mathematica closed source is central to its business model and the continuity of the software. [ 52 ] [ 53 ]
https://en.wikipedia.org/wiki/Wolfram_Mathematica
Wollaston wire is a very fine (c. 0.001 mm thick) platinum wire clad in silver and used in electrical instruments. For most uses, the silver cladding is etched away by acid to expose the platinum core. The wire is named after its inventor, William Hyde Wollaston , who first produced it in England in the early 19th century. [ 1 ] Platinum wire is drawn through successively smaller dies until it is about 0.003 inches (0.076 mm, 40 AWG ) in diameter. It is then embedded in the middle of a silver wire having a diameter of about 0.100 inches (2.5 mm, 10 AWG). This composite wire is then drawn until the silver wire has a diameter of about 0.002 inches (0.051 mm, 44 AWG), causing the embedded platinum wire to be reduced by the same 50:1 ratio to a final diameter of 0.00006 inches (1.5 μm, 74 AWG). Removal of the silver coating with an acid bath leaves the fine platinum wire as a product of the process. Wollaston wire was used in early radio detectors known as electrolytic detectors [ 2 ] and the hot wire barretter . Other uses include suspension of delicate devices, sensing of temperature, and sensitive electrical power measurements. It continues to be used for the fastest-responding hot-wire anemometers .
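The co-drawing arithmetic described above is easy to check: the overall reduction ratio equals the ratio of the starting to the finished silver diameter, and the embedded platinum core shrinks by the same factor. A short Python sketch using the figures quoted in the text (the code is illustrative, not from the source):

# Co-drawing arithmetic for Wollaston wire (dimensions in inches).
pt_start = 0.003   # platinum core before embedding (40 AWG)
ag_start = 0.100   # silver jacket outer diameter (10 AWG)
ag_final = 0.002   # composite outer diameter after drawing (44 AWG)

ratio = ag_start / ag_final   # 50:1 reduction in diameter
pt_final = pt_start / ratio   # the core shrinks by the same ratio

print(f"draw ratio {ratio:.0f}:1")
print(f"platinum core {pt_final:.5f} in ({pt_final * 25.4 * 1000:.2f} um)")
# -> draw ratio 50:1, platinum core 0.00006 in (1.52 um)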
https://en.wikipedia.org/wiki/Wollaston_wire
In mathematics , Wolstenholme's theorem states that for a prime number p ≥ 5, the congruence

$\binom{2p-1}{p-1} \equiv 1 \pmod{p^3}$

holds, where the parentheses denote a binomial coefficient . For example, with p = 7, this says that 1716 is one more than a multiple of 343. The theorem was first proved by Joseph Wolstenholme in 1862. In 1819, Charles Babbage showed the same congruence modulo p^2, which holds for p ≥ 3. An equivalent formulation is the congruence

$\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^3}$

for p ≥ 5, which is due to Wilhelm Ljunggren [ 1 ] (and, in the special case b = 1, to J. W. L. Glaisher [ citation needed ]) and is inspired by Lucas's theorem . No known composite numbers satisfy Wolstenholme's theorem, and it is conjectured that there are none (see below). A prime that satisfies the congruence modulo p^4 is called a Wolstenholme prime (see below). As Wolstenholme himself established, his theorem can also be expressed as a pair of congruences for (generalized) harmonic numbers :

$1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p-1} \equiv 0 \pmod{p^2}$ and $1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{(p-1)^2} \equiv 0 \pmod{p}.$

(Congruences with fractions make sense, provided that the denominators are coprime to the modulus.) For example, with p = 7, the first of these says that the numerator of 49/20 is a multiple of 49, while the second says the numerator of 5369/3600 is a multiple of 7. A prime p is called a Wolstenholme prime if and only if the following condition holds:

$\binom{2p-1}{p-1} \equiv 1 \pmod{p^4}.$

If p is a Wolstenholme prime , then Glaisher's theorem holds modulo p^4. The only known Wolstenholme primes so far are 16843 and 2124679 (sequence A088164 in the OEIS ); any other Wolstenholme prime must be greater than 10^11. [ 2 ] This result is consistent with the heuristic argument that the residue modulo p^4 is a pseudo-random multiple of p^3. This heuristic predicts that the number of Wolstenholme primes between K and N is roughly ln ln N − ln ln K. The Wolstenholme condition has been checked up to 10^11, and the heuristic says that there should be roughly one Wolstenholme prime between 10^11 and 10^24. A similar heuristic predicts that there are no "doubly Wolstenholme" primes, for which the congruence would hold modulo p^5. There is more than one way to prove Wolstenholme's theorem. Here is a proof that directly establishes Glaisher's version using both combinatorics and algebra. For the moment let p be any prime, and let a and b be any non-negative integers. Then a set A with ap elements can be divided into a rings of length p , and the rings can be rotated separately. Thus, the a -fold direct sum of the cyclic group of order p acts on the set A , and by extension it acts on the set of subsets of size bp . Every orbit of this group action has p^k elements, where k is the number of incomplete rings, i.e., the number of rings that only partly intersect a subset B in the orbit. There are $\binom{a}{b}$ orbits of size 1 and there are no orbits of size p . [ 3 ] Thus we first obtain Babbage's theorem

$\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^2}.$

Examining the orbits of size p^2, we also obtain

$\binom{ap}{bp} \equiv \binom{a}{b} + \binom{a}{2}\binom{a-2}{b-1}\left(\binom{2p}{p} - 2\right) \pmod{p^3}.$

Among other consequences, this equation tells us that the case a = 2 and b = 1 implies the general case of the second form of Wolstenholme's theorem. Switching from combinatorics to algebra, both sides of this congruence are polynomials in a for each fixed value of b . The congruence therefore holds when a is any integer, positive or negative, provided that b is a fixed positive integer.
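The congruences above are easy to verify computationally with exact integer arithmetic. The following Python sketch (illustrative, not part of the original article) checks Babbage's congruence modulo p^2 and Wolstenholme's congruence modulo p^3 for all primes below 100, using the standard library's math.comb:

from math import comb

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in (x for x in range(3, 100) if is_prime(x)):
    c = comb(2 * p - 1, p - 1)
    assert c % p**2 == 1            # Babbage (holds for p >= 3)
    if p >= 5:
        assert c % p**3 == 1        # Wolstenholme (holds for p >= 5)
    # A Wolstenholme prime would satisfy the congruence mod p^4;
    # the smallest is 16843, so nothing below 100 should qualify.
    assert c % p**4 != 1

print("congruences verified for all primes below 100")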
In particular, if a = −1 and b = 1, the congruence becomes

$\binom{-p}{p} \equiv \binom{-1}{1} + \binom{-1}{2}\binom{-3}{0}\left(\binom{2p}{p} - 2\right) = \binom{2p}{p} - 3 \pmod{p^3}.$

This congruence becomes an equation for $\binom{2p}{p}$ using the relation

$\binom{-p}{p} = \frac{(-1)^p}{2}\binom{2p}{p}.$

When p is odd, the relation is

$3\binom{2p}{p} \equiv 6 \pmod{p^3}.$

When p ≠ 3, we can divide both sides by 3 to complete the argument. A similar derivation modulo p^4 establishes that

$\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^4}$

holds for all positive a and b if and only if it holds when a = 2 and b = 1, i.e., if and only if p is a Wolstenholme prime. It is conjectured that if

(1)   $\binom{2n}{n} \equiv 2 \pmod{n^k}$

holds when k = 3, then n is prime. The conjecture can be understood by considering k = 1 and 2 as well as 3. When k = 1, Babbage's theorem implies that it holds for n = p^2 for p an odd prime, while Wolstenholme's theorem implies that it holds for n = p^3 for p > 3, and it holds for n = p^4 if p is a Wolstenholme prime. When k = 2, it holds for n = p^2 if p is a Wolstenholme prime. The three numbers 4 = 2^2, 8 = 2^3, and 27 = 3^3 do not satisfy (1) with k = 1, but every other prime square and prime cube does. Only 5 other composite values of n (neither prime squares nor prime cubes) are known to satisfy (1) with k = 1; they are called Wolstenholme pseudoprimes . The first three are not prime powers (sequence A228562 in the OEIS ); the last two are 16843^4 and 2124679^4, where 16843 and 2124679 are the Wolstenholme primes (sequence A088164 in the OEIS ). In addition, with the exception of 16843^2 and 2124679^2, no composites are known to satisfy (1) with k = 2, much less k = 3. Thus the conjecture is considered likely because Wolstenholme's congruence seems over-constrained and artificial for composite numbers. Moreover, if the congruence does hold for any particular n other than a prime or prime power, and any particular k, it does not imply the more general congruence

$\binom{an}{bn} \equiv \binom{a}{b} \pmod{n^k}.$

The number of Wolstenholme pseudoprimes up to x is $O\left(x^{1/2}(\log\log x)^{499712}\right)$, so the sum of the reciprocals of those numbers converges. The constant 499712 follows from the existence of only three Wolstenholme pseudoprimes up to 10^12: if the sum of the reciprocals diverged, the number of Wolstenholme pseudoprimes up to 10^12 would have to be at least 7, and since there are only 3 of them in this range, the counting function of these pseudoprimes is at most $O\left(x^{1/2}(\log\log x)^{C}\right)$ for some effectively computable constant C; we can take C to be 499712. The constant in the big O notation is also effectively computable. Leudesdorf proved that for a positive integer n coprime to 6, the following congruence holds: [ 4 ]

$\sum_{\substack{i=1 \\ \gcd(i,n)=1}}^{n-1} \frac{1}{i} \equiv 0 \pmod{n^2}.$

In 1900, Glaisher [ 5 ] [ 6 ] showed further that, for prime p > 3,

$\binom{2p-1}{p-1} \equiv 1 - \frac{2}{3}\,p^3 B_{p-3} \pmod{p^4},$

where B_n is the Bernoulli number .
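The conjectured converse, condition (1) with k = 3 characterizing the primes, can likewise be probed numerically for small n; a short Python sketch (again illustrative; the cutoff of 300 is arbitrary):

from math import comb

def satisfies_condition(n: int, k: int = 3) -> bool:
    # Condition (1): C(2n, n) == 2 (mod n^k)
    return comb(2 * n, n) % n**k == 2

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Within this range, the n satisfying (1) with k = 3 are exactly
# the primes >= 5, consistent with the conjecture (and with the
# fact that no composite satisfying the k = 3 condition is known).
hits = [n for n in range(2, 300) if satisfies_condition(n)]
assert all(is_prime(n) for n in hits)
print(hits[:10])   # [5, 7, 11, 13, 17, 19, 23, 29, 31, 37]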
https://en.wikipedia.org/wiki/Wolstenholme's_theorem
In plasma physics , Woltjer's theorem states that force-free magnetic fields in a closed system with constant force-free parameter α represent the state with lowest magnetic energy in the system and that the magnetic helicity is invariant under this condition. It is named after Lodewijk Woltjer , who derived it in 1958. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] A force-free magnetic field with flux density $\mathbf{B}$ satisfies

$\nabla \times \mathbf{B} = \alpha \mathbf{B},$

where α is a scalar function that is constant along field lines. The helicity invariant is given by

$\mathcal{H} = \int_V \mathbf{A} \cdot \mathbf{B} \,\mathrm{d}V,$

where the vector potential $\mathbf{A}$ is related to $\mathbf{B}$ by $\mathbf{B} = \nabla \times \mathbf{A}$.
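A standard variational sketch, added here following the usual textbook route rather than anything spelled out in this stub, shows where the constant-α field comes from: extremizing the magnetic energy at fixed helicity with a Lagrange multiplier λ,

$\delta\left[\frac{1}{2\mu_0}\int_V |\mathbf{B}|^2\,\mathrm{d}V - \frac{\lambda}{2\mu_0}\int_V \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V\right] = 0 \quad\Longrightarrow\quad \nabla\times\mathbf{B} = \lambda\,\mathbf{B},$

so the multiplier λ is identified with the force-free parameter α, and the minimum-energy state accessible at fixed helicity is the constant-α force-free field.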
https://en.wikipedia.org/wiki/Woltjer's_theorem
Women Who Code (WWCode) was an international non-profit organization that provided services for women pursuing technology careers and a job board [ 2 ] for companies seeking coding professionals. The organization aimed to provide an avenue into the technology world by evaluating and assisting women in developing technical skills. In addition to training, professional evaluations, meetings, and scholarships, Women Who Code offered networking and mentorship. As of 2023, the organization had held more than 16,000 free events around the world and built a membership of over 343,000 people representing over 147 countries. [ 3 ] Its most recent chief executive officer was Julie Elberfeld . Women Who Code was created in 2011. [ 4 ] It was founded as a 501(c)(3) not-for-profit, approved by the IRS in November 2013, [ 5 ] and was best known for its weekly publication the CODE Review, free technical study groups, hack nights, career development and leadership development, and speaking events featuring influential technology industry experts and investors. [ 6 ] Since its inception, WWCode produced thousands of events worldwide and garnered sponsorship from organizations such as Google , Zendesk , VMware , KPCB , Capital One , Nike , Yelp , and many others. In the summer of 2016, Women Who Code went through Y Combinator . [ 7 ] On April 18, 2024, the organization announced it would be shutting down due to lack of funding. [ 8 ] [ 9 ] Women Who Code's initiatives included: [ 10 ]
https://en.wikipedia.org/wiki/Women_Who_Code
Women Who Weld is a nonprofit organization based in Detroit, Michigan . Women Who Weld teaches women how to weld and find employment in the welding industry through intensive and introductory welding training programs, including Week-Long Intensive Welding Training Classes and Single-Day Introductory Workshops. [ 1 ] Women Who Weld was founded by Samantha Farrugia, [ 2 ] who learned how to weld while attending the University of Michigan for her master's degree in the Taubman College of Architecture and Urban Planning. [ 1 ] The organization has been featured in The Atlantic , [ 1 ] Architectural Digest , [ 2 ] the Detroit Free Press , [ 3 ] the Record-Eagle , [ 4 ] InStyle Magazine , [ 5 ] and more.
https://en.wikipedia.org/wiki/Women_Who_Weld
Women and Families for Defence was a Conservative -aligned pressure group originally founded in March 1983 [ 1 ] as Women for Defence . [ 2 ] It was founded in opposition to the Greenham Common Women's Peace Camp [ 3 ] and the Campaign for Nuclear Disarmament , [ 4 ] and aimed to oppose arguments in favour of unilateral nuclear disarmament . It was reportedly founded by Lady Olga Maitland , Ann Widdecombe , [ 5 ] [ 6 ] Virginia Bottomley [ 5 ] and Angela Rumbold [ 7 ] (who also became vice-chairwoman of the organization [ 8 ]). However, Alfred Sherman told the Sunday Times that it was Maitland who 'solely' set up the group, with his help. [ 9 ] The Viscount Trenchard , the former Minister for Defence Procurement , became its president. [ 10 ] [ 11 ] The group had its own magazine, Deter , and received a commendation from the U.S. president, Ronald Reagan . [ 12 ] The group held its first public meeting on 1 May 1983 in Trafalgar Square , at which 150 members of the group met, sang " Land of Hope and Glory " and argued in favour of a nuclear deterrent as a precursor to multilateral nuclear disarmament. The group also delivered a petition, signed by 13,000 people, responding to Western proposals for missile reductions. [ 13 ] In 1986, it was expelled from a council that was organising events to mark that year's International Year of Peace . [ 10 ] Maitland later turned the group into a general anti-Labour political canvassing group, Women and Families for Canvassing . [ 14 ]
https://en.wikipedia.org/wiki/Women_and_Families_for_Defence
Women in Cell Biology (WICB) is a subcommittee of the American Society for Cell Biology (ASCB) created to promote women in cell biology and to present awards. A group of women were unhappy with the lack of recognition of women in the ASCB. In 1971, Virginia Walbot gathered a group of women to meet at the annual ASCB meeting, and WICB began. The goal was to provide a space for women to talk and network with other women in the field, learn about job opportunities, and promote women in academia. Newsletters were distributed containing job listings and news of prominent women in biology. Originally, WICB was not accepted by ASCB; the newsletter was not funded and was discontinued in the 1970s. [ 1 ] WICB was established as a committee within ASCB in 1994. Currently, WICB meets annually at ASCB meetings and has a column in the ASCB newsletter. [ 2 ] The goals of WICB include nominating candidates for and presenting awards and communicating through the newsletter. [ 3 ] WICB presents the following awards annually: [ 4 ]
https://en.wikipedia.org/wiki/Women_in_Cell_Biology
The role of women in and affiliated with NASA has varied over time. As early as 1922, women were working as physicists and in other technical positions. [ 1 ] From the 1930s to the present, more women have joined NASA teams, not only at Langley Memorial but also at the Jet Propulsion Laboratory, the Glenn Research Center, and numerous other NASA sites throughout the United States. [ 2 ] As the space program has grown, women have advanced into many roles, including astronauts. [ 1 ] [ 2 ] [ 3 ] As early as 1922, women like Pearl I. Young were working as physicists and in other technical positions. Young was the second female physicist working for the federal government, at the National Advisory Committee for Aeronautics (NACA), at Langley Memorial Aeronautical Laboratory building 1202 in Langley, Virginia . [ 4 ] Women first worked in support roles as administrators, secretaries, doctors, and psychologists, and later as engineers. In the 1960s, NASA started recruiting women and minorities for the space program. By the end of the 1960s, NASA had employed thousands of women. [ 5 ] Some of the women, like Mary Shep Burton , Gloria B. Martinez (the first Spanish woman hired), Cathy Osgood , and Shirley Hunt , worked in the computing division, while Sue Erwin , Lois Ransdell , and Maureen Bowen worked as secretaries for various members of the Mission and Flight Control teams. [ 6 ] Dana Ulery was the first woman engineer hired at NASA's Jet Propulsion Laboratory (JPL). [ 5 ] Although she was classified only as a junior engineer, for more than seven years no other woman engineer joined JPL. [ 5 ] Another woman, Donna Shirley , worked at JPL as a mission engineer in the 1960s. [ 5 ] Dr. Carolyn Huntoon was a pioneer in researching astronaut metabolism and other body systems. [ 7 ] Margaret Hamilton was the lead programmer for the Apollo program 's guidance computer. Judy Sullivan was the lead biomedical engineer for the Apollo 11 mission. Although women had a difficult time establishing themselves within the organization, NASA did have some women who charted unknown territory throughout the period. For example, Katherine Johnson was one of the most prolific figures in NASA history. Johnson, a Black woman, worked her way through the ranks to become one of the most respected figures on the Apollo missions, a major step that gave Black people and women, at NASA and among the general public, someone to look up to. Along with Katherine Johnson, who played a pivotal role as a computer for NASA, Dorothy Vaughan and Mary Jackson helped calculate integral equations and mathematical calculations to recheck and ensure that the launching of spacecraft was calculated correctly. Overall, these figures stood as pioneers as women's work at NASA grew more common. [ 1 ] However, not everyone accepted this change. In 1962, George Low , NASA's Chief of Manned Spaceflight, told Congress that working with women would delay his work. Meanwhile, in the same year, John F. Kennedy signed the President's Commission on the Status of Women to encourage gender equality in the workforce. This eventually led NASA administrator James Webb to create an agency-wide policy directive stating that NASA provides equal opportunities for all kinds of people willing to work at NASA. [ 7 ] Despite this, no women were selected to join the astronaut corps in 1963, 1965, 1966, or 1967.
[ 7 ] The 1970s were a stepping stone that led women closer to becoming astronauts. At the same time, the military began accepting women for pilot training, which eventually led to women astronauts. [ 8 ] In 1977, NASA recruitment surged thanks to Nichelle Nichols 's help. [ 9 ] Part of the advantage Nichols had as a recruiter was that her role as Lieutenant Uhura on Star Trek had inspired young girls to want to become NASA astronauts when they grew up. [ 9 ] One of those girls was Dr. Mae Jemison , who in 1992 became the first Black woman astronaut. [ 10 ] [ 11 ] Another important figure in the 1970s was Dr. Carolyn Huntoon , who turned down becoming an astronaut to serve on the astronaut selection committee. [ 7 ] NASA sent Huntoon around the United States to encourage women to apply as astronauts or to enter the STEM field. [ 7 ] In 1979, Kathryn Sullivan flew a NASA WB-57F reconnaissance aircraft to an altitude of 63,300 feet, breaking an unofficial altitude record for American women. [ 7 ] On June 18, 1983, Sally Ride made history as the first American woman astronaut to go into space. [ 8 ] A little over a year later, Judith Resnik flew on the Space Shuttle Discovery and became the second American woman in space. [ 7 ] [ 8 ] In 1988, Ellen Ochoa joined NASA; she became the first Hispanic woman astronaut. [ 12 ] Ochoa flew four missions aboard the shuttles Discovery and Atlantis , logging almost 1,000 hours in space. In 1985, Shannon Lucid made her first flight, and by the end of her career she had spent 188 days in space. [ 7 ] Lucid set an American record, for both men and women, for the most days in space, which stood until 2002. [ 7 ] By the 1990s, NASA was conducting substantial research on the effects of spaceflight on women's bodies. Carolyn Huntoon gave a speech in 1994 at the 2nd Annual Women's Health and Space Luncheon, shedding light on NASA's unrecognized work. [ 7 ] On February 3, 1995, history was made when Colonel Eileen Collins became the first woman to pilot a U.S. spacecraft. [ 8 ] Meanwhile, Shannon Lucid , a board engineer, flew five missions in space and served as chief scientist for NASA in Washington, DC . [ 8 ] Starting in 2000, the number of women in NASA's planetary missions began to increase. Women were most often given roles as Co-Investigators and Participating Scientists. [ 13 ] From below 10% of those selected until the 1990s, the share of women started to increase in the 2000s to around 30%, particularly among women given the role of Co-Investigator. [ 13 ] Pamela Melroy , for example, flew several missions to the International Space Station on the shuttles Discovery and Atlantis . [ 8 ] Melroy was not only an astronaut but also a veteran military pilot with more than 5,000 hours of flight time. [ 8 ] In 2007, Peggy Whitson became the first woman to command the International Space Station . [ 8 ] Aside from commanding, Whitson conducted dozens of experiments in space that furthered space technologies still in use today. In the same year, Barbara Morgan became the first teacher in space; however, it has been argued that Christa McAuliffe , announced in 1985, was the first teacher in space, with Morgan only an alternate or secondary candidate. [ 8 ] McAuliffe died in the 1986 Challenger accident, and Morgan was unable to go to space until 2007.
[ 8 ] Sunita Williams is known for holding many records for women, including 322 total days in space and more than 50 hours of spacewalks, and for being the second woman to command the ISS . [ 8 ] The unofficial Mercury 13 program is considered the start of the inclusion of women in U.S. space programs; the first seven astronauts chosen for the official Mercury project were all white men. [ 14 ] Randy Lovelace and Don Flickinger , who were involved in the selection process, considered including women. [ 14 ] [ 15 ] [ 11 ] Lovelace thought that women could perform major tasks in space just as men could. [ 16 ] [ 11 ] Through this, Lovelace and Flickinger met Jerrie Cobb in 1960, who played a major role in recruiting and testing women. [ 14 ] [ 11 ] The Women in Space Program (December 20, 1959) was the "revived" version of the Women in Space Earliest program, which had been cancelled in November 1959. [ 15 ] Like the program for men, it required candidate testing. [ 15 ] However, the parameters for these tests were varied to accommodate women. [ 15 ] In the screening phase, for example, men were required to be degree-holding jet pilots who had attended military test pilot school and had a minimum of 1,500 hours of flying time. [ 15 ] Since women were excluded from some of these opportunities, screening shifted to women with commercial pilot licenses, especially as women served as flight instructors during this period. [ 15 ] Cobb, who underwent the testing first, [ 11 ] became the leader of the FLATs (Fellow Lady Astronaut Trainees), along with 12 other women, making 13 women in total (hence the media name Mercury 13). [ 14 ] Even though Cobb was appointed a NASA consultant and continued the testing, the women were still not trained as astronauts. [ 14 ] During the examinations, some scientists thought that women showed advantages over men as candidates for spaceflight. [ 17 ] For example, women's internal organs were assumed to be more tolerant of radiation and vibration. [ 17 ] And because women are on average smaller, spacecraft and flights would be less expensive if women flew them. [ 17 ] However, the testing for women was cancelled after it was discovered that NASA had not issued an official request for it. [ 15 ] Lovelace decided not to continue the program and ended up in an uncomfortable position at NASA. [ 15 ] Meanwhile, Jerrie Cobb, who had assumed leadership and facilitated the testing for women, was removed from her position at NASA. [ 15 ] Since Sally Ride, the first American woman astronaut, 43 American women had gone to space by 2012. [ 18 ] Outside the U.S., only 12 other women astronauts had been in space. [ 18 ] As of 2009, about 10 percent of NASA astronauts are women. [ 5 ]
https://en.wikipedia.org/wiki/Women_in_NASA
Many scholars and policymakers have noted that the fields of science, technology, engineering, and mathematics (STEM) have remained predominantly male, with historically low participation among women, since the origins of these fields in the 18th century during the Age of Enlightenment . [ 1 ] Scholars are exploring the various reasons for the continued existence of this gender disparity in STEM fields. Those who view this disparity as resulting from discriminatory forces are also seeking ways to redress it within STEM fields (these are typically construed as well-compensated, high-status professions with universal career appeal). [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Women's participation in science, technology, and engineering has been limited [ 7 ] [ 8 ] [ 9 ] and also under-reported throughout most of history. [ 10 ] [ 11 ] This has been the case, with exceptions, until large-scale changes began around the 1970s. Scholars have discussed possible reasons and mechanisms behind the limitations, such as ingrained gender roles , [ 12 ] sexism , [ 13 ] [ 14 ] and sex differences in psychology . [ 15 ] [ 16 ] [ 17 ] [ 18 ] There has also been an effort among historians of science to uncover under-reported contributions of women. [ 19 ] [ 20 ] [ 21 ] During the 1950s and 1960s, for example, a group of women known as "computers" at NASA performed essential calculations for aeronautical and space research, working as mathematicians, engineers, and analysts and laying the groundwork for early space exploration, even though their contributions were often overlooked. The term STEM was first used in 2001, [ 22 ] primarily in connection with the choice of education and career. Different STEM fields have different histories, but women's participation, although limited, has been seen throughout history. Science, protoscience and mathematics have been practiced since ancient times, and during this time women have contributed to such fields as medicine , botany , astronomy , algebra , and geometry . In the Middle Ages in Europe and the Middle East, Christian monasteries and Islamic madrasas were places where women could work on such subjects as mathematics and the study of nature. [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] Ada Lovelace, an English mathematician, is often credited as the world's first computer programmer: in the mid-1800s she worked on Charles Babbage's proposed mechanical computer, the Analytical Engine, and created an algorithm intended to be processed by the machine, making her a pioneer in the field of computer science. Universities in the Christian tradition began as places of education for a professional clergy that allowed no women, and the practice of barring women continued even after universities' missions broadened. [ 31 ] Because women were generally barred from formal higher education until late in the 19th century, it was very difficult for them to enter specialized disciplines. [ 32 ] The development of industrial technology was dominated by men, and early technical achievements, such as the invention of the steam engine, were mainly due to men. [ 33 ] Nevertheless, there are many examples of women's contributions to engineering .
[ 34 ] Initially a "computer" was a person doing computations , often a woman. [ 35 ] Working as a computer required conscientiousness, accuracy and speed. [ 36 ] Some women who initially worked as human computers later advanced from doing simpler calculations to higher levels of work, where they specified tasks and algorithms and analyzed results. [ 37 ] Women's participation rates in the STEM fields started increasing noticeably in the 1970s and 1980s. [ 38 ] Some fields, such as biotechnology , now have almost 50% participation of women. [ 39 ] Studies suggest that many factors contribute to the attitudes towards the achievement of young men in mathematics and science , including encouragement from parents, interactions with mathematics and science teachers, curriculum content, hands-on laboratory experiences, high school achievement in mathematics and science, and resources available at home. [ 41 ] In the United States , research findings are mixed concerning when boys' and girls' attitudes about mathematics and science diverge. Analyzing several nationally representative longitudinal studies , [ 42 ] one researcher found few differences in girls' and boys' attitudes toward science in the early secondary school years. [ 41 ] Students' aspirations to pursue careers in mathematics and science influence both the courses they choose to take in those areas and the level of effort they put forth in those courses. A 1996 U.S. study suggested that girls begin to lose self-confidence in middle school because they believe that men possess more intelligence in technological fields. [ 43 ] [ 44 ] The fact that men outperform women on some measures of spatial ability , [ 45 ] a skill set many engineering professionals deem vital, generates this misconception. [ 4 ] Feminist scholars postulate that boys are more likely to gain spatial skills outside the classroom because they are culturally and socially encouraged to build and work with their hands. [ 46 ] Research shows that girls can develop these same skills with the same form of training. [ 47 ] [ 48 ] A 1996 U.S. study of college freshmen by the Higher Education Research Institute showed that men and women differed greatly in their intended fields of study. Of first-time college freshmen in 1996, 20 percent of men and 4 percent of women planned to major in computer science and engineering , [ 49 ] while similar percentages of men and women planned to major in biology or physical sciences . [ 50 ] The differences in intended majors between male and female first-time freshmen relate directly to the differences in the fields in which men and women earn their degrees. At the post-secondary level, women are less likely than men to earn a degree in mathematics, physical sciences, or computer science and engineering. The exception to this gender imbalance is the field of life science . [ 51 ] [ 52 ] In Scotland , a large number of women graduate in STEM subjects but, compared with men, fail to move on to a STEM career. The Royal Society of Edinburgh estimates that doubling women's high-skill contributions to Scotland's economy would benefit it by £170 million per annum. [ 53 ] [ 54 ] A 2017 study found that closing the gender gap in STEM education would have a positive impact on economic growth in the EU, contributing to an increase in GDP per capita of 0.7–0.9% across the bloc by 2030 and of 2.2–3.0% by 2050.
[ 55 ] [ 56 ] Female college graduates earned less on average than male college graduates, even though they shared in the earnings growth of all college graduates in the 1980s. Some of the difference in salary is related to the difference in occupations entered by women and men. Among recent science and engineering bachelor's degree recipients, women were less likely than men to be employed in science and engineering occupations. There remains a wage gap between men and women in comparable scientific positions. Among more experienced scientists and engineers, the gender gap in salaries is greater than for recent graduates. [ 57 ] Salaries are highest in mathematics, computer science, and engineering, fields in which women are not highly represented. In Australia , a study conducted by the Australian Bureau of Statistics showed that the gender wage gap between men and women in STEM fields stood at 30.1 percent as of 2013, an increase of three percentage points since 2012. [ 58 ] In addition, according to a study by Moss, [ 59 ] when faculty members of top research institutions in America were asked to recruit student applicants for a laboratory manager position, both male and female faculty members rated the male applicants as more hireable and competent for the position than the female applicants, even though the female applicants' résumés were identical to the male applicants'. In the Moss study, faculty members were also willing to give the male applicants a higher starting salary and more career mentoring opportunities. [ 59 ] The percentage of Ph.D.s in STEM fields in the U.S. earned by women is about 42%, [ 60 ] whereas the percentage of Ph.D.s in all fields earned by women is about 52%. [ 61 ] Stereotypes and educational differences can lead to the decline of women in STEM fields. These differences start as early as the third grade, according to Thomas Dee , with boys advancing in math and science and girls advancing in reading. [ 62 ] According to UNESCO, in 2023, 122 million girls globally were out of school, and women still accounted for nearly two-thirds of all adults who cannot read. [ 63 ] UNESCO , among other agencies including the European Commission and the Association of Academies and Societies of Sciences in Asia (AASSA), has been outspoken about the underrepresentation of women in STEM fields globally. [ 64 ] [ 65 ] [ 66 ] [ 67 ] Despite these agencies' efforts to compile and interpret comparative statistics, caution is necessary. Ann Hibner Koblitz has commented on the obstacles to making meaningful statistical comparisons between countries: [ 68 ] For a variety of reasons, it is difficult to obtain reliable data on international comparisons of women in STEM fields. Aggregate figures do not tell us much, especially since terminology describing educational levels, content of majors, job categories, and other markers varies from country to country. Even when different countries use the same definitions of terms, the social significance of the categories may differ considerably. Koblitz remarks: [ 69 ] It is not possible to use the same indicators to determine the situation in every country. The significant statistic might be the percentage of women teaching at the university level.
But it might also be the proportion of women at research institutes and academies of sciences (and at what level), or the percentage of women who publish (or who publish in foreign as opposed to domestic journals), or the proportion of women who go abroad for conferences, post-graduate study, and so on, or the percentage of women awarded grants by national and international funding agencies. Indices can have different meanings in different countries, and the prestige of various positions and honors can vary considerably. According to UNESCO statistics, 30% of the Sub-Saharan tech workforce are women; this share rose to 33.5 percent in 2018. [ 70 ] [ 67 ] South Africa features among the top 20 countries in the world for the share of professionals with skills in artificial intelligence and machine learning, with women representing 28 percent of these South African professionals. [ 67 ] A fact sheet published by UNESCO in March 2015 [ 71 ] presented worldwide statistics on women in the STEM fields, with a focus on Asia and the Pacific region. It reported that, worldwide, 30 percent of researchers were women; as of 2018, this share had increased to 33 percent. [ 67 ] Within the region, East Asia, the Pacific, South Asia and West Asia had the most uneven balance, with 20 percent of researchers being women in each of those sub-regions. Meanwhile, Central Asia had the most equal balance in the region, with women comprising 46 percent of its researchers. The Central Asian countries Azerbaijan and Kazakhstan were the only countries in Asia where women were the majority of researchers, though in both cases by a very small margin. [ 71 ] As of 2004, 13.9% of students enrolled in science programs in Cambodia were female, and as of 2002, 21% of researchers in science, technology, and innovation fields were female. These figures are significantly lower than those of other Asian countries such as Malaysia , Mongolia , and South Korea . According to a UNESCO report on women in STEM in Asian countries, Cambodia's education system has a long history of male dominance stemming from its male-only Buddhist teaching practices. Starting in 1924, girls were allowed to enroll in school. Bias against women, not only in education but in other aspects of life as well, exists in the form of traditional views of men as more powerful and dignified than women, especially in the home and in the workplace, according to UNESCO's A Complex Formula . [ 64 ] UNESCO's A Complex Formula states that Indonesia 's government has been working towards gender equality, especially through the Ministry of Education and Culture , but stereotypes about women's roles in the workplace persist. Due to traditional views and societal norms , women struggle to remain in their careers or to move up in the workplace. Substantially more women are enrolled in science-based fields such as pharmacy and biology than in mathematics and physics . Within engineering, statistics vary by discipline; women make up 78% of chemical engineering students but only 5% of mechanical engineering students. As of 2005, out of 35,564 researchers in science, technology, and engineering, only 10,874, or 31%, were female. [ 64 ] According to OECD data, women account for about 25 percent of enrollment in STEM-related programs at the tertiary education level in Japan. [ 72 ] Japan has the lowest share of women among tertiary teaching staff in OECD countries, with only 28% female faculty members, far below the OECD average of 44%.
Women make up just 17.7% of teaching staff at national universities, with only 10.8% in science and engineering fields and 9.4% in executive positions. Additionally, female enrollment in natural sciences, mathematics, and statistics stands at 27% (OECD average: 52%), while in engineering, manufacturing, and construction it is just 16% (OECD average: 26%). [ 73 ] According to OECD data, women account for about 66 percent of enrollment in STEM-related programs at the tertiary education level in Kazakhstan. [ 72 ] Despite strong enrollment rates, women in Kazakhstan remain underrepresented in STEM leadership roles. The government, along with international organizations, has introduced mentorship programs, scholarships, and leadership training to encourage more women to enter and stay in STEM careers. These initiatives aim to close the gender gap and promote inclusivity in high-tech industries. [ 74 ] According to UNESCO, 48.19% of students enrolled in science programs in Malaysia were female as of 2011. This number has grown significantly over the past three decades, during which the country's employment of women has increased by 95%. In Malaysia, over 50% of employees in the computer industry, generally a male-dominated field within STEM, are women. More than 70% of students enrolled in pharmacy are female, while in engineering only 36% of students are female. Women held 49% of research positions in science, technology, and innovation as of 2011. [ 64 ] According to UNESCO data from 2012 and 2018 respectively, 40.2% of students enrolled in science programs and 49% of researchers in science, technology, and innovation in Mongolia are female. Traditionally, nomadic Mongol culture was fairly egalitarian, with both women and men raising children, tending livestock, and fighting in battle, which mirrors the relative equality of women and men in Mongolia's modern-day workforce. More females than males pursue higher education, and 65% of college graduates in Mongolia are women. However, women earn about 19–30% less than their male counterparts and are perceived by society as less suited to engineering than men. Thirty percent or fewer of employees in computer science, construction architecture, and engineering are female, while three in four biology students are female. [ 64 ] As of 2011, 26.17% of Nepal 's science students were women, as were 19% of its engineering students. In research, women held 7.8% of positions in 2010. These low percentages correspond with Nepal's patriarchal societal values. In Nepal, women who enter STEM fields most often enter forestry or medicine, specifically nursing , which is perceived as a predominantly female occupation in most countries. [ 64 ] In 2012, 30.63% of students enrolled in science programs in South Korea were female, a number that has been increasing since the digital revolution. The numbers of male and female students enrolled at most levels of education are comparable as well, though the gender difference is larger in higher education. Confucian beliefs in the lower societal value of women, as well as other cultural factors, could influence South Korea's STEM gender gap. In South Korea, as in other countries, the percentage of women in medicine (61.6%) is much higher than the percentage of women in engineering (15.4%) and other more math-based STEM fields. In research occupations in science, technology, and innovation, women made up 17% of the workforce as of 2011.
In South Korea, most women working in STEM fields are classified as "non-regular" or temporary employees, indicating poor job stability. [ 64 ] In a study conducted by the University of Glasgow examining math anxiety and test performance of boys and girls from various countries, researchers found that South Korea had a large sex difference in mathematics scores, with female students scoring significantly lower on math tests, and experiencing more math anxiety, than male students. [ 75 ] According to OECD data, women account for about 53 percent of enrollment in STEM-related programs at the tertiary education level in Thailand. [ 72 ] Ann Hibner Koblitz reported on a series of interviews conducted in 2015 in Abu Dhabi with women engineers and computer scientists who had come to the United Arab Emirates and other Gulf states to find opportunities that were not available to them in their home countries. The women spoke of a remarkably high level of job satisfaction and relatively little discrimination. [ 68 ] Koblitz comments that ...most people in most countries outside of the Middle East have no idea that the region, in particular the UAE, is a magnet for young, dynamic Arab women making successful careers for themselves in a variety of high-tech and other scientific fields; "land of opportunity," "a tech-person's paradise," and yes, even "mecca" were among the terms used to describe the UAE by the women I met. Nearly half of PhD degrees pursued in Central and South America are completed by women (2018). However, only a small minority of women are represented at decision-making levels. [ 76 ] A 2018 study gathered 6,849 articles published in Latin America and found that women were 31% of published researchers in 2018, an increase from 27% in 2002. [ 77 ] The same study also found that when women led the research group, 60% of published contributors were women, compared with 20% when men were the leaders. [ 77 ] Looking at over 1,500 articles related to botany published in Latin America, a study found that participation from women and men was equal, whether in publications or in leading roles in scientific organizations. [ 78 ] Women also had higher rates of publication in Argentina, Brazil, and Mexico compared to other Latin American countries, despite participation being nearly the same throughout the region. [ 78 ] Although women publish strongly in botany, men still publish more than women and are more often the ones cited in research papers and studies relating to the sciences. [ 78 ] The study concluded that women in Chile who are enrolled in STEM have higher enrollment in the sciences closely related to biology and medicine than in the more technological sciences. [ 79 ] After graduation, women made up 67.70% of workers in engineering in health and 59.80% of workers in biomedical engineering, while in other fields, such as mechanical engineering or electrical engineering (the more technical fields), men dominated the workforce, with over 90% of workers being male. [ 79 ] In the European Union , only 16.7% on average of ICT (information and communication technology) specialists are women. Only in Romania and Bulgaria do women hold more than 25 percent of these roles. The gender distribution is more balanced, particularly in new member states, when ICT technicians (middle- and low-ranking positions) are taken into account.
[ 40 ] In 2012, women were 47.3% of PhD graduates overall: 51% in social sciences, business and law, 42% in science, mathematics and computing, and just 28% of PhD graduates in engineering, manufacturing and construction. In the computing subfield, only 21% of PhD graduates were women. In 2013 in the EU, on average, male scientists and engineers made up 4.1% of the total labour force, while women made up only 2.8%. In more than half of the countries, women make up less than 45% of scientists and engineers. The situation has improved: between 2008 and 2011 the number of women amongst employed scientists and engineers grew by an average of 11.1% per year, while the number of men grew by only 3.3% over the same period. [ 80 ] In 2015, in Slovenia , Portugal , France , Sweden , Norway , and Italy , there were more boys than girls taking advanced courses in mathematics and physics in secondary education in Grade 12. [ 81 ] In 2018, European Commissioner for Digital Economy and Society Mariya Gabriel announced plans to increase the participation of women in the digital sector by challenging stereotypes, promoting digital skills and education, and advocating for more women entrepreneurs. [ 82 ] In 2018, Ireland took the step of linking research funding from the Higher Education Authority to an institution's ability to reduce gender inequality. [ 67 ] According to the National Science Foundation , women comprise 43 percent of the U.S. workforce of scientists and engineers (S&E) under 75 years old. [ 83 ] For those under 29 years old, women comprise 56% of the science and engineering workforce. Of scientists and engineers seeking employment, 50% of those under 75 are women, as are 49% of those under 29. About one in seven engineers is female. [ 84 ] However, women comprise 28% of workers in S&E occupations; not all women who are trained in S&E are employed as scientists or engineers. [ 85 ] Women hold 58% of S&E-related occupations. [ 85 ] Women in STEM fields earn considerably less than men, even after controlling for a wide set of characteristics such as education and age. On average, men in STEM jobs earn $36.34 per hour while women in STEM jobs earn $31.11 per hour. [ 84 ] There are many reasons why gender pay gaps in STEM fields continue to exist, including women choosing STEM majors that pay less. However, even with the same degree, women still earn less: a research study on starting pay with an engineering degree found that women earned less than $61,000 while men earned more than $65,000. [ 86 ] Women earn the majority of bachelor's degrees, including in STEM fields as defined by the National Center for Education Statistics . However, they are underrepresented in specific fields, including computer science, engineering, and mathematics. Along with women, racial/ethnic minorities in the United States are also underrepresented in STEM. Asian women are well represented in STEM fields in the U.S. (though not as much as men of the same ethnicity) compared to African American, Hispanic, Pacific Islander, and Native American women. [ 87 ] Within academia, these minority women hold less than 1% of tenure-track positions at the top 100 U.S. universities, despite constituting approximately 13% of the total US population.
[ 88 ] A 2015 study suggested that attitudes towards hiring women in STEM tenure-track positions have improved, with a 2:1 preference for women in STEM after adjusting for equal qualifications and lifestyles (e.g., single, married, divorced). [ 89 ] According to Kimberly Jackson, prejudice and assumed stereotypes keep women of color, especially Black women, from studying in STEM fields. Psychologically, stereotypes about Black women's intellect, cognitive abilities, and work ethic contribute to their lack of confidence in STEM. Some schools, such as Spelman College , have made attempts to change perceptions of African-American women and improve their rates of becoming involved and technically proficient in STEM. [ 90 ] Students of color, especially Black students, face difficulty in STEM majors as they encounter hostile climates, microaggressions, and a lack of support and mentorship. Despite facing discrimination, many African American women have risen to prominence in STEM fields, starting in the mid-1800s, when physician Rebecca Lee Crumpler became the first African American woman to earn a medical degree. In recent times, major scientific advances have been made by African American women such as Dr. Kizzmekia Corbett , who contributed to developing COVID-19 vaccines; Dr. Ayanna Howard , a leader in robotics and artificial intelligence; and Dr. Hadiyah-Nicole Green , a physicist known for her work in cancer treatments using lasers. Several organizations have worked to help African American women obtain the support needed to succeed in STEM, including Sisters in STEM , Black Girls Do Stem , STEMNoire , and BWIStem. A 2015 NCWIT study estimated that Latin American women represented only 1% of the US tech workforce. [ 91 ] A 2018 study of 50 Latin American women who founded a technology company indicated that 20% were Mexican, 14% bi-racial, 8% unknown, and 4% Venezuelan. [ 92 ] A Statistics Canada study from 2019 found that women make up 44% of first-year STEM students, compared with 64% of first-year non-STEM students. Women who transfer out of STEM courses usually go to a related field, such as health care or finance. [ 93 ] A study conducted by the University of British Columbia found that only 20–25% of computer science students at Canadian colleges and universities are women, and only about 1 in 5 of those will graduate from their programs. [ 94 ] Statistically, women are less likely to choose a STEM program, regardless of mathematical ability. Young men with lower marks in mathematics are more likely to pursue STEM fields than their women-identified peers with higher marks in mathematics. [ 95 ] Australia has only recently made significant attempts to promote the participation of women in STEMM disciplines, including the formation in 2014 of Women in STEMM Australia, a non-profit organisation that aims to connect women in STEMM disciplines in a coherent network. [ 96 ] Similarly, the STEM Women directory has been established to promote gender equity by showcasing the diversity of talent among Australian women in STEM fields. [ 97 ] In 2015, SAGE (Science in Australia Gender Equity) was started as a joint venture of the Australian Academy of Science and the Australian Academy of Technology and Engineering . [ 98 ] The program is tasked with implementing a pilot of the Athena SWAN accreditation framework within Australian higher education institutions. In terms of the most prestigious awards in STEM fields , [ 99 ] fewer have been awarded to women than to men.
Between 1901 and 2017, the female:total ratio of Nobel Prizes was 2:207 for physics , [ 100 ] 4:178 for chemistry, 12:214 for physiology/medicine , [ 101 ] [ 23 ] and 1:79 for economic sciences . [ 102 ] The ratios for other fields were 14:114 in literature and 16:104 for peace. [ 103 ] [ 104 ] Maryam Mirzakhani was the first woman, and the first Iranian, to receive the Fields Medal , in 2014. [ 105 ] [ 106 ] The Fields Medal is one of the most prestigious prizes in mathematics [ 107 ] and has been awarded 56 times in total. Fewer female students participate in prestigious STEM-related competitions such as the International Mathematical Olympiad . In 2017, only 10% of IMO participants were female, and there was one female member on the South Korean winning team of six. [ 108 ] [ 109 ] Abbiss states that "the ubiquity of computers in everyday life has seen the breaking down of gender distinctions in preferences for and the use of different applications, particularly in the use of the internet and email." [ 110 ] Both genders have acquired skills, competencies and confidence in using a variety of technological , [ 111 ] mobile and application tools for personal, [ 112 ] educational and professional use at the high school level, but a gap remains in the enrollment of girls in computer science classes, which declines from grades 10 to 12. For higher-education programs in information and communications technology, women make up only 3% of graduates globally. [ 113 ] [ 81 ] A 2016 review of UK patent applications found that the proportion of new inventions registered by women was rising, but that most female inventors were active in stereotypically female fields such as "designing bras and make-up". 94% of inventions in the field of computing, 96% in automotive applications and mining, and 99% in explosives and munitions were by men. [ 114 ] [ 115 ] In 2016, Russia had the highest percentage of patents filed by women, at about 16%. In 2019, the USPTO issued a report showing that the share of female inventors listed on US patents had recently risen to about 17%. [ 116 ] There are a variety of proposed reasons for the relatively low numbers of women in STEM fields. These can be broadly classified into societal, psychological, and innate explanations, though explanations are not necessarily restricted to just one of these categories. This "leaky pipeline" attrition may be due to discrimination , [ 117 ] both overt and covert, faced by women in STEM fields. [ 118 ] According to Schiebinger, women are twice as likely as men to leave jobs in science and engineering. [ 119 ] : 33 In the 1980s, researchers demonstrated a general evaluative bias against women. [ 120 ] In a 2012 study, email requests for meetings were sent to professors in doctoral programs at the top 260 U.S. universities. It was impossible to determine whether any particular individual in the study was exhibiting discrimination, since each participant viewed a request from only one potential graduate student; however, researchers found evidence of discrimination against ethnic minorities and women relative to Caucasian men. [ 121 ] In another study, science faculty were sent the materials of students applying for a lab manager position at their university. [ 59 ] The materials were the same for each participant, but each application was randomly assigned either a male or a female name.
The researchers found that faculty members rated the male candidates as both more competent and more hirable than the female candidates, despite the applications being otherwise identical. [ 59 ] If individuals are given information about a prospective student's gender, they may infer that he or she possesses traits consistent with stereotypes for that gender. [ 122 ] A 2014 study found that men were favored in some domains, such as tenure rates in biology, but that the majority of domains were gender-fair. The authors interpreted this to suggest that the underrepresentation of women in the professorial ranks was not solely caused by sexist hiring, promotion, and remuneration. [ 123 ] Audrey Azoulay, head of UNESCO, stated that even in the 21st century, "women and girls are sidelined in science-related fields due to their gender." [ 124 ]

A 2017 survey showed that women working in STEM fields are more likely to experience workplace discrimination than men. [ 125 ] Around half of the women in STEM professions have experienced gender-based discrimination, such as men being paid more for the same job, being treated as if they were not qualified for the job, or being mocked or insulted. [ 125 ] Some women also stated that in a workplace where most employees were male, they felt that being a woman was a barrier to their success. [ 125 ]

Stereotypes about what someone in a STEM field should look and act like may cause established members of these fields to overlook individuals who are highly competent. [ 126 ] The stereotypical scientist or individual in another STEM profession is usually thought to be male. [ 127 ] Women in STEM fields may not fit individuals' conception of what a scientist, engineer, or mathematician "should" look like and may thus be overlooked or penalized. The Role Congruity Theory of Prejudice states that perceived incongruity between gender and a particular role or occupation can result in negative evaluations. [ 128 ] [ 129 ] [ 130 ] In addition, negative stereotypes about women's quantitative abilities may lead people to devalue their work or discourage these women from continuing in STEM fields. [ 131 ]

Both men and women who work in "nontraditional" occupations may encounter discrimination, but the forms and consequences of this discrimination are different. Individuals of a particular gender are often perceived to be better suited to particular careers or areas of study than those of the other gender. [ 132 ] [ 133 ] A study found that job advertisements for male-dominated careers tended to use more agentic words (words denoting agency, such as "leader" and "goal-oriented") associated with male stereotypes. [ 132 ] Social Role Theory, proposed in 1991, states that men are expected to display agentic qualities and women to display communal qualities. [ 134 ] These expectations can influence hiring decisions. [ 135 ] A 2009 study found that women tended to be described in more communal terms and men in more agentic terms in letters of recommendation. These researchers also found that communal characteristics were negatively related to hiring decisions in academia. [ 135 ] Although women entering traditionally male professions face negative stereotypes suggesting that they are not "real" women, these stereotypes do not seem to deter women to the same degree that similar stereotypes may deter men from pursuing nontraditional professions. There is historical evidence that women flock to male-identified occupations once opportunities become available.
[ 136 ] On the other hand, examples of occupations changing from predominantly female to predominantly male are very rare in human history. The few existing cases—such as medicine—suggest that redefinition of the occupation as appropriately masculine is necessary before men will consider joining it. [ 137 ] Although men in female-dominated occupations may contend with negative stereotypes about their masculinity, they may also experience certain benefits. In 1992 it was suggested that women in male-dominated occupations tend to hit a glass ceiling, while men in female-dominated occupations may ride a "glass escalator". [ 138 ]

The Black Sheep effect occurs when individuals evaluate highly qualified members of their in-group more favorably than equally qualified members of their out-group. [ 139 ] [ 140 ] [ 141 ] [ 142 ] However, when an individual's in-group members have average or below-average qualities, they are likely to evaluate them much lower than out-group members with equivalent qualifications. [ 139 ] [ 140 ] [ 141 ] [ 142 ] This suggests that established women in STEM fields will be more likely than established men to help early-career women who display sufficient qualifications, but less likely than men to help early-career women who display insufficient qualifications.

The Queen Bee effect is similar to the Black Sheep effect but applies only to women. It explains why higher-status women, particularly in male-dominated professions, may actually be far less likely to help other women than their male colleagues might be. [ 143 ] [ 144 ] A 2004 study found that while doctoral students in a number of different disciplines did not exhibit any gender differences in work commitment or work satisfaction, faculty members at the same university believed that female students were less committed to their work than male students. [ 144 ] What was particularly surprising was that these beliefs were most strongly endorsed by female faculty members, rather than male faculty members. [ 144 ] One potential explanation for this finding is that individual mobility for a member of a negatively stereotyped group is often accompanied by a social and psychological distancing of oneself from the group. This implies that successful women in traditionally male-dominated careers do not see their success as evidence that negative stereotypes about women's quantitative and analytical abilities are wrong, but rather as proof that they personally are exceptions to the rule. [ 144 ] Thus, such women may actually play a role in perpetuating, rather than abolishing, these negative stereotypes.

In STEM fields, the support and encouragement of a mentor can make a significant difference in women's decisions about whether to continue pursuing a career in their discipline. [ 145 ] [ 146 ] This may be particularly true for younger individuals, who may face many obstacles early in their careers. [ 6 ] Since these younger individuals often look to those who are more established in their discipline for help and guidance, the responsiveness and helpfulness of potential mentors are especially important. Many mentorship programs are emerging; however, many women experience harassment from their mentors, which, among other problems, can prevent them from finishing the program. A 2020 study surveyed women working in STEM fields who live in the U.S. Northeast and Eastern Canada.
[ 147 ] Most women reported that finding a mentor at their workplace was complex, and only a third of the women had some sort of mentor, formal or informal. [ 147 ] During their time in school, half of the participants were able to find a professor to be their mentor. They added that mentorship helped them complete their degree and guided them from the educational sphere to the workplace. [ 147 ] The majority of the women agreed that mentorship is a crucial resource, and many want to be involved in mentorship, but there are not enough resources or opportunities in their work environment. [ 147 ]

Women in STEM may leave due to not being invited to professional meetings, the use of sexually discriminating standards against women, inflexible working conditions, the perceived need to hide pregnancies, and the struggle to balance family and work. Women in STEM fields who have children need either child care or a long leave of absence. When a nuclear family cannot afford child care, it is typically the mother who gives up her career to stay at home with the children. [ 148 ] This is due in part to women statistically being paid less: because the man earns more, he continues to work while the woman gives up her career. Maternity leave is another issue women in STEM fields face. In the U.S., maternity leave is governed by the Family and Medical Leave Act of 1993 (FMLA). [ 149 ] The FMLA requires 12 weeks of unpaid leave annually for mothers of newborn or newly adopted children. This is one of the lowest levels of leave in the industrialized world; all developed countries except the United States guarantee mothers at least some paid time off. [ 150 ] [ 151 ] If a new mother does not have external financial support or savings, she may not be able to take her full maternity leave. Few companies allow men to take paternity leave, and it may be shorter than women's maternity leave. [ 152 ]

In 1993, The New England Journal of Medicine reported that three-quarters of women students and residents had been harassed at least once during their medical training. [ 119 ] : 51 The 2020 Tribeca Film Festival documentary "Picture a Scientist" highlighted the severe sexual and physical harassment women in STEM fields can face, often without adequate recourse. In the film, Jane Willenbring, a scientist and associate professor at Scripps Institution of Oceanography, shared how she was harassed by her mentor David R. Marchant during her fieldwork: she was called demeaning names, harassed when using the bathroom, and even had shards of volcanic sand blown into her eyes.

In engineering and science education, women held almost 50 percent of non-tenure-track lecturer and instructor jobs, but only 10 percent of tenured or tenure-track professorships, in 1996. In addition, the number of female department chairs in medical schools did not change from 1976 to 1996. [ 153 ] Moreover, women who do make it to tenured or tenure-track positions may face the difficulties associated with holding a token status: they may lack support from colleagues and may face antagonism from peers and supervisors. [ 154 ]

Research has suggested that women's lack of interest may in part stem from stereotypes about employees and workplaces in STEM fields, stereotypes to which women are disproportionately responsive.
[ 155 ] [ 156 ] [ 157 ] [ 158 ] In the early 1980s, Rossiter put forth the concept of "territorial segregation" or occupational segregation, the idea that women "cluster" in certain fields of study. [ 119 ] : 34 For example, "women are more likely to teach and do research in the humanities and social sciences than in the natural sciences and engineering", [ 119 ] : 34 and the majority of college women tend to choose majors such as psychology, education, English, performing arts, and nursing. [ 159 ] Rossiter also used "hierarchical segregation" as an explanation for the low number of women in STEM fields, describing it as a decrease in the number of women as one "moves up the ladder of power and prestige." [ 119 ] : 33 This is related to the concept of the leaky STEM pipeline: the metaphor of the leaky pipeline has been used to describe how women drop out of STEM fields at all stages of their careers.

In the U.S., out of 2,000 high-school-aged persons, 1,944 were enrolled in high school in fall 2014. [ 160 ] Assuming equal enrollment for boys and girls, 60 boys and 62 girls are considered "gifted." [ 161 ] By comparing enrollment to the population of persons 20–24 years old, 880 of the 1,000 original women, and 654 of the original 1,000 men, will enroll in college (2014). [ 162 ] [ 163 ] In freshman year, 330 women and 320 men will express an intent to study science or engineering. [ 164 ] Of these, only 142 women and 135 men will actually obtain a bachelor's degree in science or engineering, [ 162 ] [ 165 ] and only 7 women and 10 men will obtain a PhD in science or engineering. [ 162 ] [ 166 ] [ 60 ]

A meta-analysis concluded that men prefer working with things and women prefer working with people. When interests were classified by RIASEC type (Realistic, Investigative, Artistic, Social, Enterprising, Conventional), men showed stronger Realistic and Investigative interests, and women showed stronger Artistic, Social, and Conventional interests. Sex differences were also found for more specific measures of interest in engineering, science, and mathematics, with men showing stronger interest in each. [ 167 ] In a 3-year interview study, Seymour and Hewitt (1997) found that the perception that non-STEM academic majors offered better education options and better matched their interests was the most common reason (46%) provided by female students for switching majors from STEM to non-STEM areas. The second most frequently cited reason for switching was a reported loss of interest in the women's chosen STEM majors. Additionally, 38% of female students who remained in STEM majors expressed concerns that other academic areas might be a better fit for their interests. [ 168 ] Preston's (2004) survey of 1,688 individuals who had left the sciences also showed that 30 percent of the women endorsed "other fields more interesting" as their reason for leaving. [ 169 ] Advanced math skills do not often lead women to be interested in a STEM career: a Statistics Canada survey found that even young women of high mathematical ability are much less likely to enter a STEM field than young men of similar or even lesser ability. [ 170 ]

A 2018 study originally claimed that countries with more gender equality had fewer women in science, technology, engineering and mathematics (STEM) fields.
Some commentators argued that this was evidence of gender differences arising in more progressive countries, the so-called gender-equality paradox. However, a 2019 correction to the study disclosed that the authors had created a previously undisclosed and unvalidated method to measure the "propensity" of women and men to attain a higher degree in STEM, as opposed to the originally claimed measurement of "women's share of STEM degrees". Harvard researchers were unable to independently recreate the data reported in the study. A follow-up paper by the researchers who discovered the discrepancy found conceptual and empirical problems with the gender-equality paradox in STEM hypothesis. [ 171 ] [ 172 ] [ 173 ] [ 174 ] [ 175 ] [ 176 ] [ 177 ] [ 178 ]

According to A. N. Pell, the pipeline has several major leaks spanning the time from elementary school to retirement. [ 153 ] One of the most important periods is adolescence. One of the factors behind girls' lack of confidence might be unqualified or ineffective teachers. Teachers' gendered perceptions of their students' capabilities can create an unbalanced learning environment and deter girls from pursuing further STEM education. [ 179 ] Teachers can also pass these stereotyped beliefs on to their students. [ 180 ] Studies have also shown that student-teacher interactions affect girls' engagement with STEM. [ 181 ] [ 182 ] [ 81 ] Teachers often give boys more opportunity to figure out the solution to a problem by themselves while telling girls to follow the rules. [ 119 ] : 56 Teachers are also more likely to accept questions from boys while telling girls to wait for their turns. [ 153 ] This is partly due to gender expectations that boys will be active while girls will be quiet and obedient. [ 154 ] Prior to 1985, girls were provided fewer laboratory opportunities than boys. [ 153 ] In middle and high school, science, mathematics, mechanics and computer courses are mainly taken by male students and also tend to be taught by male teachers. [ 183 ] A lack of opportunities in STEM fields can lead to a loss of self-esteem in math and science abilities, and low self-esteem can prevent people from entering science and math fields. [ 153 ]

One study found that women steer away from STEM fields because they believe they are not qualified for them; the study suggested that this could be addressed by encouraging girls to participate in more mathematics classes. [ 184 ] Among STEM-intending students, 35% of women stated that their reason for leaving calculus was a lack of understanding of the material, while only 14% of men stated the same. [ 185 ] The study attributes this difference to women's low confidence in their ability rather than to actual skill, and further establishes that women and men have different levels of confidence in their ability and that confidence is related to individuals' performance in STEM fields. [ 185 ] Another study found that when men and women of equal math ability were asked to rate their own ability, women rated themselves at a much lower level. [ 186 ] Programs intended to reduce math anxiety or increase confidence have a positive impact on women continuing to pursue a career in STEM fields.
[ 187 ] The issue of confidence can not only keep women from entering STEM fields; even women in upper-level courses with greater skill are strongly affected by the stereotype that they do not, by nature, possess the innate ability to succeed. [ 188 ] This can damage the confidence of women who have made it through courses designed to filter students out of the field. Being chronically outnumbered and underestimated can fuel the feelings of imposter syndrome reported by many women in STEM fields. [ 189 ]

Stereotype threat arises from the fear that one's actions will confirm a negative stereotype about one's in-group. This fear creates additional stress, consuming valuable cognitive resources and lowering task performance in the threatened domain. [ 190 ] [ 191 ] [ 192 ] Individuals are susceptible to stereotype threat whenever they are assessed in a domain for which there is a perceived negative stereotype about a group to which they belong. Stereotype threat undermines the academic performance of women and girls in math and science, which leads to an underestimation of abilities in these subjects by standard measures of academic achievement. [ 193 ] [ 131 ] Individuals who identify strongly with a certain area (e.g., math) are more likely to have their performance in that area hampered by stereotype threat than those who identify less strongly with the area. [ 192 ] This means that even highly motivated students from negatively stereotyped groups are likely to be adversely affected by stereotype threat and thus may come to disengage from the stereotyped domain. [ 192 ] Negative stereotypes about girls' capabilities in mathematics and science drastically lower their performance in mathematics and science courses as well as their interest in pursuing a STEM career. [ 194 ] Studies have found that gender differences in performance disappear if students are told that there are no gender differences on a particular mathematics test. [ 193 ] This indicates that the learning environment can greatly impact success in a course. Stereotype threat has been criticized on a theoretical basis. [ 195 ] [ 196 ] Several attempts to replicate its experimental evidence have failed, [ 196 ] [ 197 ] [ 198 ] [ 199 ] and the findings in support of the concept have been suggested to be the product of publication bias. [ 199 ] [ 200 ]

One study [ 188 ] examined how stereotype threat and math identification can affect women majoring in STEM-related fields. Three conditions were designed to test the impact of stereotypes on math performance: one group of women was informed that men had previously out-performed women on the same calculus test they were about to take; the next group was told that men and women had performed at the same level; and the last group was told nothing about how men had performed, with no mention of gender before the test. Women scored best when there was no mention of gender, and worst when told that men had performed better than women. Previous research shows that, to pursue male-dominated STEM fields, women must have greater confidence in their math and science ability. [ 185 ]

Some studies propose the explanation that STEM fields (especially fields like physics, math and philosophy) are considered by both teachers and students to require more innate talent than skills that can be learned.
[ 201 ] Combined with a tendency to view women as having less of the required innate ability, researchers have proposed, this can result in assessing women as less qualified for STEM positions. A study by Ellis, Fosdick and Rasmussen concluded that without strong skills in calculus, women cannot perform as well as their male counterparts in any STEM field, which leads to fewer women pursuing careers in these fields. [ 185 ] A high percentage of women who do pursue a career in STEM do not continue on this pathway after taking Calculus I, which was found to be a class that weeds students out of the STEM pathway. [ 185 ]

There have been several controversial statements about innate ability and success in STEM. Notable examples include Lawrence Summers, former president of Harvard University, who suggested that differences in cognitive ability at the high end could contribute to the gender imbalance in top positions; Summers later stepped down as president. [ 202 ] Former Google engineer James Damore wrote a memo entitled Google's Ideological Echo Chamber suggesting that differences in trait distributions between men and women were a reason for the gender imbalance in STEM, and that affirmative action to reduce the gap could discriminate against highly qualified male candidates. Damore was fired for sending out the memo. [ 203 ]

A 2019 study by two Paris economists suggests that women's under-representation in STEM fields could be the result of comparative advantage: the driver is not girls' 10% lower performance on math tests, but rather their far superior reading performance, which, taken together with their math performance, gives them almost one standard deviation better overall performance than boys and is theorized to make women more likely to study humanities-related subjects than math-related ones. [ 204 ] [ 205 ] The current gender gap, however, is widely considered to be economically inefficient overall. [ 206 ] There are a multitude of factors that may explain the low representation of women in STEM careers. [ 207 ]

Anne-Marie Slaughter, the first woman to hold the position of Director of Policy Planning for the United States Department of State, [ 208 ] has suggested strategies for the corporate and political environment to support women in fulfilling, to the best of their abilities, the many roles and responsibilities they undertake. [ 209 ] The academic and research environment for women may benefit from some of her suggestions to help women excel while maintaining a work-life balance.

A number of researchers have tested interventions to alleviate stereotype threat for women in situations where their math and science skills are being evaluated. [ 210 ] The hope is that by combating stereotype threat, these interventions will boost women's performance, encouraging a greater number of them to persist in STEM careers. One simple intervention is educating individuals about the existence of stereotype threat. Researchers found that women who were taught about stereotype threat and how it could negatively impact women's performance in math performed as well as men on a math test, even when stereotype threat was induced. These women also performed better than women who were not taught about stereotype threat before they took the math test. [ 211 ] One proposed method for alleviating stereotype threat is introducing role models.
One study found that women who took a math test administered by a female experimenter did not suffer a drop in performance, compared with women whose test was administered by a male experimenter. [ 212 ] Additionally, these researchers found that it was not the physical presence of the female experimenter but rather learning about her apparent competence in math that buffered participants against stereotype threat. [ 212 ] The findings of another study suggest that role models do not necessarily have to be individuals with authority or high status, but can also be drawn from peer groups. This study found that girls in same-gender groups performed better on a task that measured math skills than girls in mixed-gender groups. [ 213 ] This was because girls in the same-gender groups had greater access to positive role models, in the form of female classmates who excelled in math, than girls in mixed-gender groups. [ 213 ] Similarly, another experiment showed that making group achievements salient helped buffer women against stereotype threat: female participants who read about successful women, even though these successes were not directly related to performance in math, performed better on a subsequent math test than participants who read about successful corporations rather than successful women. [ 214 ] A study investigating the role of textbook images in science performance found that women demonstrated better comprehension of a passage from a chemistry lesson when the text was accompanied by a counter-stereotypic image (i.e., of a female scientist) than when the text was accompanied by a stereotypic image (i.e., of a male scientist). [ 127 ]

Other scholars distinguish between the challenges of recruitment and of retention in increasing women's participation in STEM fields. These researchers suggest that although both female and male role models can be effective in recruiting women to STEM fields, female role models are more effective at promoting the retention of women in these fields. [ 215 ] Female teachers can also act as role models for young girls: reports have shown that the presence of female teachers positively influences girls' perceptions of STEM and increases their interest in STEM careers. [ 81 ] [ 216 ]

Researchers have also investigated the usefulness of self-affirmation in alleviating stereotype threat. One study found that women who affirmed a personal value prior to experiencing stereotype threat performed as well on a math test as men and as women who did not experience stereotype threat. [ 217 ] A subsequent study found that a short writing exercise, in which college students enrolled in an introductory physics course wrote about their most important values, substantially decreased the gender performance gap and boosted women's grades. [ 218 ] Scholars believe that the effectiveness of such values-affirmation exercises lies in their ability to help individuals view themselves as complex persons, rather than through the lens of a harmful stereotype. Supporting this hypothesis, another study found that women who were encouraged to draw self-concept maps with many nodes did not experience a performance decrease on a math test. [ 219 ] However, women who did not draw self-concept maps, or who drew maps with only a few nodes, performed significantly worse than men on the math test.
[ 219 ] The effect of the maps with many nodes was to remind women of their "multiple roles and identities," which were unrelated to, and would thus not be harmed by, their performance on the math test. [ 219 ]

Researchers believe that efforts to increase women's enrollment in STEM fields should begin in elementary and middle school. [ 220 ] Gender differences are evident by kindergarten, and many children have already developed attitudes towards math and their future careers by that age. [ 221 ] According to a study of high school and middle school students, there is evidence of a gender gap in science and math test scores. [ 222 ] Another method of reducing the gender gap is to create communities and opportunities apart from school. [ 223 ] For instance, residential programs, women-only colleges, and affiliations between high schools and colleges for STEM programs can help reduce the gender gap. [ 224 ] Research has also shown that the gender gap in STEM may be due in part to an unsupportive culture that hampers women's career advancement; across the United States, women remain underrepresented in tenured faculty and leadership positions.

Organizations such as Girls Who Code, StemBox, [ 225 ] and Stanford's Women in Data Science Initiative aim to encourage women and girls to explore male-dominated STEM fields. Many of these organizations offer summer programs and scholarships to girls interested in STEM fields. The U.S. government has funded similar endeavors: the Department of State's Bureau of Educational and Cultural Affairs created TechGirls and TechWomen, exchange programs which teach Middle Eastern and North African girls and women skills valuable in STEM fields and encourage them to pursue STEM careers. [ 226 ] There is also the TeachHer Initiative, spearheaded by UNESCO, Costa Rican First Lady Mercedes Peñas Domingo, and Jill Biden, which aims to close the gender gap in STEAM curricula and careers. The Initiative also emphasizes the importance of after-school activities and clubs for girls. [ 81 ] To that end, Dell Technologies teamed up with Microsoft and Intel in 2019 to create Girls Who Game (GWG), an after-school program for young girls and underserved K-12 students across the U.S. and Canada. [ 227 ] The program uses Minecraft: Education Edition as a tool to teach girls communication, collaboration, creativity, and critical-thinking skills. Current campaigns to increase women's participation within STEM fields include the UK's GlamSci, [ 228 ] and Verizon's #InspireHerMind project. [ 229 ] [ 230 ] The US Office of Science and Technology Policy during the Obama administration collaborated with the White House Council on Women and Girls to increase the participation of women and girls within STEM fields, [ 231 ] along with the "Educate to Innovate" campaign. [ 232 ]

In August 2019, the University of Technology Sydney announced that women, or anyone with a long-term educational disadvantage, applying to the Faculty of Engineering and Information Technology, or for a construction project management degree in the Faculty of Design, Architecture and Building, would be required to have a minimum Australian Tertiary Admission Rank ten points lower than that required of other students. [ 233 ]

Programs such as FIRST (For Inspiration and Recognition of Science and Technology) are working to eliminate the gender gap in computer science. [ 234 ] FIRST is a robotics and research platform for students from kindergarten through high school.
[ 234 ] The activities and competitions in the program are usually about current STEM problems. [ 234 ] According to one report, around 13.7 percent of men and 2.6 percent of women entering college hope to major in engineering. [ 234 ] In contrast, 67 percent of men and 47 percent of women who engaged in the FIRST program tend to major in engineering. [ 234 ]

Creative Resilience: Art by Women in Science is a multi-media exhibition and accompanying publication produced in 2021 by the Gender Section of the United Nations Educational, Scientific and Cultural Organization (UNESCO). The project aims to give visibility to women, both professionals and university students, working in science, technology, engineering and mathematics (STEM). Featuring short biographical information and graphic reproductions of artworks dealing with the COVID-19 pandemic, all accessible online, the project provides a platform for women scientists to express their experiences, insights, and creative responses to the pandemic. [ 235 ]
https://en.wikipedia.org/wiki/Women_in_STEM
Women in Science Hall of Fame was established in 2010 by the U.S. State Department Environment, Science, Technology, and Health Hub for the Middle East and North Africa to recognize exceptional women scientists in that region of the world. [ 1 ] Annual awards were made from 2011 to 2015 and were coordinated by the U.S. Embassy in Amman, Jordan. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Women_in_Science_Hall_of_Fame_(U.S._State_Department)
Women in Scientific and Engineering Professions is a 1984 book co-edited by American authors Violet B. Haas and Carolyn C. Perrucci. It was published through University of Michigan Press. The book was reviewed in several academic journals. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Women_in_Scientific_and_Engineering_Professions
Women in archaeology is an aspect of the history of archaeology and of the topic of women in science more generally. In the nineteenth century, women were discouraged from pursuing interests in archaeology, but throughout the twentieth century their participation and the recognition of their expertise increased. Nevertheless, women in archaeology continue to face discrimination based on their gender, and many face harassment in the workplace.

As a professional field of study, archaeology was initially established as an academic discipline in the nineteenth century and typically developed from people engaged in the study of antiquities. [ 1 ] [ 2 ] Prior to the Victorian era, women in Canada, the United Kingdom and the United States were rarely engaged in professional archaeology (though at this time, archaeology was not so much a profession as the practice of wealthy individuals, with workers paid to undertake the digging). [ 3 ] Participation by women in the field was discouraged, both by men and by societal pressure, as the occupation was seen as conflicting with the accepted view of women as homemakers and nurturers. [ 4 ] Even after they began to enter the field, the reluctance of male colleagues to accept them in fieldwork led many women to choose roles outside of academia, seeking positions in museums or in cultural preservation associations. [ 5 ] In Europe, women often entered the discipline as research partners with their husbands, or to learn about local cultures when their spouses were posted to colonial outposts or missionary fields. [ 6 ]

From the mid-1850s, women's higher education facilities began offering separate courses for women, and in the 1870s several European countries opened university curricula to women. [ 7 ] Though women were accepted into the study of archaeology, they were rarely considered equals and often were not admitted to prestigious societies [ 8 ] or allowed to complete training in the field. Swedish archaeologist Hanna Rydh was one exception, [ 9 ] as was French archaeologist Madeleine Colani, [ 10 ] but more typical were the hard-fought battles of women such as Edith Hall, Harriet Boyd Hawes, Marina Picazo, Eugénie Sellers Strong, and Blanche E. Wheeler to undertake excavation projects. [ 9 ] The pioneers among women archaeologists were more typically women such as German archaeologist Johanna Mestorf, who worked as a museum curator and academic; [ 11 ] writers such as British Egyptologist Amelia Edwards, [ 12 ] Persianist Gertrude Bell, [ 10 ] and French Persianist Jean Dieulafoy, who traveled and wrote about excavations during their travels; and women like Tessa Wheeler, who assisted her husband by compiling reports and raising money. [ 13 ]

At the turn of the twentieth century, British women such as Eugénie Sellers Strong, who taught at the Archaeological Institute of America and the British School at Rome, and Margaret Murray, who lectured at University College London, began to join the ranks of university staff. [ 10 ] By the time of World War I, the majority of women working in archaeology were employed in museums. Noted women archaeological curators or museum directors include the Dane Maria Mogensen, the Greek Semni Karouzou, and the Spaniards Concepción Blanco Mínguez and Ursicina Martínez Gallego. [ 14 ] To carve out their own niches, women typically focused on research close to where they lived or from their native cultures, or undertook studies researching household items typically ignored by men.
For example, Marija Gimbutas focused on Eastern European topics even after relocating to the United States; [ 15 ] Lanier Simmons, who wanted to study Maya culture, ended up researching closer to home because of family obligations; [ 16 ] and Harriet Boyd focused on domestic objects and utensils. [ 15 ] The Greek Anna Apostolaki, the Dane Margrethe Hald, the Spaniard Felipa Niño Mas and the Swede Agnes Geijer became experts on textiles; the Dane Elisabeth Munksgaard focused on clothing, [ 17 ] while the Norwegian Charlotte Blindheim studied Viking costumes and jewelry. [ 18 ] Pottery and art were also topics on which women focused. [ 17 ] Prior to the 1970s, even women like Gertrude Caton-Thompson, Hilda Petrie, and Elizabeth Riefstahl, pioneers in Egyptology who had made distinguished contributions to the field, were omitted from compilations of experts working in the field; if women were mentioned at all, their roles were trivialized. [ 19 ]

During the New Deal, the Works Progress Administration sponsored excavations at mound sites in Alabama, Georgia and North Carolina, which allowed women of color and working-class women to participate in archaeological work; however, class- and race-based definitions of femininity curtailed broad participation by white women, who tended to focus on participating in amateur organizations. [ 20 ]

The formal conservation of archaeological objects in Western museum environments from the 1880s onwards was dominated by male scientists and technicians. However, conservation of objects in the field and in educational settings was predominantly performed by women, often the wives and relatives of male archaeologists. As with the work of female archaeologists, these expert contributions to archaeological practice were omitted from official publications and records of archaeological work. The expertise of early female conservators was applied and refined at the Institute of Archaeology at St John's Lodge, Regents Park, from 1937 to 1959. When the Institute of Archaeology moved to Gordon Square in 1959, a conservation teaching programme was established by Ione Gedye, who taught at the institute from 1937 to 1975. [ 21 ] The objects treated at the Lodge formed the basis of the Institute of Archaeology collections, including the Petrie Palestine collection. [ 22 ] These collections were instrumental in establishing the Institute of Archaeology as an internationally significant centre of archaeological study.

Critically analyzing the role of women in archaeology, from the professionalisation of the discipline in the 19th century to the present day, remains a crucial task. Few publications address the subject, and the absence of women from histories of archaeology has prompted urgent reflection on the way disciplinary chronicles are written. [ 23 ]

Statistics show that women experience a glass ceiling in academic archaeology. Sue Hamilton, the director of the UCL Institute of Archaeology, noted that 60–70% of the institute's undergraduate and postgraduate students were women, as were the majority of its postdoctoral researchers. However, the proportion of women amongst permanent academic staff has never been more than 31%. Women are progressively further under-represented in each academic rank at the institute: 38% of lecturers are female, 41% of senior lecturers, 17% of readers, and just 11% of professors.
[ 24 ] A 2016 study found a similar pattern in Australian universities. Whilst 41% of academic archaeologists were women, there was an imbalance in female representation in research fellowships (67%) compared to higher-ranked lecturing posts (31%). This study identified a "two-tiered" glass ceiling: women were less likely to obtain permanent tenure-track positions, and those who did also found it more difficult to advance to senior ranks. [ 25 ] In 1994, around 15% of the archaeologists working in the top 30 academic institutions for the field were women. [ 26 ]

On the other hand, it was within academic archaeology that women first broke the glass ceiling at a number of British universities. Dorothy Garrod was the first woman to hold a chair (in any subject) at either the University of Cambridge or the University of Oxford, having been appointed Disney Professor of Archaeology at Cambridge in 1939. [ 27 ] Kathleen Kenyon was acting director of the Institute of Archaeology, University of London, during the Second World War. Rosemary Cramp was the first woman to hold a chair at the University of Durham, having been appointed Professor of Archaeology in 1971. [ 28 ] [ 29 ]

In 2014, the Survey of Academic Field Experiences (SAFE) surveyed nearly 700 scientists on their experiences of sexual harassment and sexual assault during fieldwork. The survey was aimed at field researchers across a range of disciplines (e.g. anthropologists, biologists), but archaeologists constituted the largest group of respondents. The survey confirmed that sexual harassment and assault were "systemic" problems at field sites, with 64% of respondents reporting that they had personally experienced harassment and 20% that they had personally experienced sexual assault. Women, who made up the majority of the respondents (77.5%), were significantly more likely to have experienced both, and were also more likely to report that such experiences occurred "regularly" or "frequently". The targets were almost always students or early-career researchers, and the perpetrators were most likely to be more senior members of the research team, although harassment and assault from peers and members of local communities were also relatively common. The experiences reported ranged from "inadvertent alienating behavior" to unwanted sexual advances, sexual assault and rape. Few respondents found that there were adequate codes of conduct or reporting procedures in place. The authors of the SAFE survey emphasised the significant negative impacts that such experiences have on victims' job satisfaction, performance, career progression, and physical and mental health. [ 30 ]
https://en.wikipedia.org/wiki/Women_in_archaeology
This is a list of women chemists. It includes women who have been important to the development or practice of chemistry and whose research or application has made significant contributions to basic or applied chemistry.

Eight women have won the Nobel Prize in Chemistry, awarded annually since 1901 by the Royal Swedish Academy of Sciences. Marie Curie was the first woman to receive the prize, in 1911; it was her second Nobel Prize (she also won the prize in physics in 1903, along with Pierre Curie and Henri Becquerel), making her the only woman to be awarded two Nobel Prizes. Her prize in chemistry was for her "discovery of the elements radium and polonium, by the isolation of radium and the study of the nature and compounds of this remarkable element." Irène Joliot-Curie, Marie's daughter, became the second woman to be awarded this prize, in 1935, for her discovery of artificial radioactivity. Dorothy Hodgkin won the prize in 1964 for the development of protein crystallography; among her significant discoveries are the structures of penicillin and vitamin B12. Forty-five years later, Ada Yonath shared the prize with Venkatraman Ramakrishnan and Thomas A. Steitz for the study of the structure and function of the ribosome. Emmanuelle Charpentier and Jennifer A. Doudna won the 2020 prize in chemistry "for the development of a method for genome editing." [ 2 ] Charpentier and Doudna are the first women to share the Nobel Prize in Chemistry. [ 3 ]

Three women have been awarded the Wolf Prize in Chemistry. In the periodic table of elements, two chemical elements are named after female scientists.

The following list is split by the century in which the majority of each scientist's work was performed; the scientists listed may have been born, or have performed some of their work, outside of the century under which they are listed.
https://en.wikipedia.org/wiki/Women_in_chemistry
This article discusses women who have made an important contribution to the field of physics.

Five women have won the Nobel Prize in Physics, awarded annually since 1901 by the Royal Swedish Academy of Sciences. [ 1 ] [ 2 ] Marie Curie was the first woman to be nominated (in 1902) and the first to receive the prize (in 1903); she shared half of the prize with her husband Pierre Curie for their joint work on radioactivity, with the other half going to Henri Becquerel, who had discovered the phenomenon. Curie also received the Nobel Prize in Chemistry in 1911, making her the first person to win two Nobel Prizes and, as of 2023, the only person to be awarded Nobel Prizes in two different scientific categories. [ 8 ] Maria Goeppert Mayer became the second woman to win the prize, in 1963, for the theoretical development of the nuclear shell model, sharing half of the prize with J. Hans D. Jensen (the other half was given to Eugene Wigner). Donna Strickland shared half of the prize in 2018 with Gérard Mourou for their work on chirped pulse amplification beginning in the 1980s (the other half was given to Arthur Ashkin). Andrea Ghez became the fourth female laureate in 2020; she shared one half of the prize with Reinhard Genzel for the discovery of the supermassive compact object Sagittarius A* at the center of our galaxy (the other half was given to Roger Penrose). In 2023, Anne L'Huillier shared the prize in equal parts with Pierre Agostini and Ferenc Krausz for their experimental contributions to the development of attosecond physics. L'Huillier is the first female laureate to receive one third of the monetary award of the Nobel Prize in Physics (Curie, Goeppert Mayer, Strickland and Ghez each received one quarter).

Physicists and physicochemists who won a Nobel Prize in Chemistry include Marie Curie, [ 9 ] Irène Joliot-Curie, daughter of Marie Curie, in 1935, [ 10 ] and Dorothy Hodgkin in 1964. [ 11 ] Nuclear physicist Rosalyn Sussman Yalow was the second female scientist to win the Nobel Prize in Physiology or Medicine, in 1977, for the development of radioimmunoassays. [ 12 ] Human rights activist and 2023 Nobel Peace Prize laureate Narges Mohammadi was trained in nuclear physics. [ 13 ]

According to the Nobel archives (updated up to 1970), a number of other female physicists were nominated for the Nobel Prize in Physics but did not receive it. Irène Joliot-Curie [ 10 ] and Dorothy Hodgkin [ 11 ] were also nominated for the Nobel Prize in Physics, but received the Nobel Prize in Chemistry in 1935 and 1964, respectively. Lise Meitner is the most-nominated female physicist, with 16 nominations for physics and 14 for chemistry. [ 20 ] About 1.7% of the Nobel nominations in physics up to 1970 were for women. [ 20 ] Aside from those named above, other physicists and physicochemists who were nominated for the Nobel Prize in Chemistry but did not receive it include Ida Noddack, [ 21 ] Marguerite Perey, [ 22 ] Alberte Pullman, [ 23 ] and Erika Cremer. [ 24 ] Up to 1970, eight female scientists had participated as nominators for the Nobel Prize in Physics: Marie Curie, Hertha Sponer, Marie-Antoinette Tonnelat, Anne Barbara Underhill, Katharina Boll-Dornberger, Maria Goeppert Mayer, Dorothy Hodgkin, and Margaret Burbidge. [ 25 ]

Several women have been selected as Clarivate Citation laureates in Physics, an annual list of possible candidates for the Nobel Prize in Physics based on citation statistics.
Two women have been awarded the Wolf Prize in Physics, which has been awarded by the Wolf Foundation in Israel since 1978. Several women have been awarded the Breakthrough Prize in Fundamental Physics since its establishment in 2012. Owing to the Matilda effect, female scientists have sometimes not been recognized in the naming of the phenomena they discovered; nonetheless, some physics phenomena are named after female scientists.
https://en.wikipedia.org/wiki/Women_in_physics
The presence of women in science spans the earliest times of the history of science, wherein they have made substantial contributions. Historians with an interest in gender and science have researched the scientific endeavors and accomplishments of women, the barriers they have faced, and the strategies they have implemented to have their work peer-reviewed and accepted in major scientific journals and other publications. The historical, critical, and sociological study of these issues has become an academic discipline in its own right.

The involvement of women in medicine occurred in several early Western civilizations, and the study of natural philosophy in ancient Greece was open to women. Women contributed to the proto-science of alchemy in the first or second centuries CE. During the Middle Ages, religious convents were an important place of education for women, and some of these communities provided opportunities for women to contribute to scholarly research. The 11th century saw the emergence of the first universities; women were, for the most part, excluded from university education. [ 1 ] Outside academia, botany was the science that benefitted most from the contributions of women in early modern times. [ 2 ] The attitude toward educating women in medical fields appears to have been more liberal in Italy than elsewhere. The first known woman to earn a university chair in a scientific field of studies was the eighteenth-century Italian scientist Laura Bassi.

Although gender roles were largely prescribed in the eighteenth century, women nonetheless made substantial advances in science. During the nineteenth century, women were excluded from most formal scientific education, but they began to be admitted into learned societies during this period. In the later nineteenth century, the rise of the women's college provided jobs for women scientists and opportunities for education. Marie Curie paved the way for scientists to study radioactive decay and discovered the elements radium and polonium. [ 3 ] Working as a physicist and chemist, she conducted pioneering research on radioactive decay, was the first woman to receive a Nobel Prize (in physics), and became the first person to receive a second Nobel Prize when she won the prize in chemistry. Sixty women were awarded the Nobel Prize between 1901 and 2022; twenty-four of them were awarded the Nobel Prize in physics, chemistry, or physiology or medicine. [ 4 ]

In the 1970s and 1980s, many books and articles about women scientists appeared, but virtually all of the published sources ignored women of color and women outside of Europe and North America. [ 5 ] The formation of the Kovalevskaia Fund in 1985 and the Organization for Women in Science for the Developing World in 1993 gave more visibility to previously marginalized women scientists, but even today there is a dearth of information about current and historical women in science in developing countries. According to academic Ann Hibner Koblitz: [ 6 ] Most work on women scientists has focused on the personalities and scientific subcultures of Western Europe and North America, and historians of women in science have implicitly or explicitly assumed that the observations made for those regions will hold true for the rest of the world.
Koblitz has said that these generalizations about women in science often do not hold up cross-culturally: [ 7 ] A scientific or technical field that might be considered 'unwomanly' in one country in a given period may enjoy the participation of many women in a different historical period or in another country. An example is engineering, which in many countries is considered the exclusive domain of men, especially in prestigious subfields such as electrical or mechanical engineering. There are exceptions to this, however. In the former Soviet Union all subspecialties of engineering had high percentages of women, and at the Universidad Nacional de Ingeniería of Nicaragua, women made up 70% of engineering students in 1990.

The involvement of women in the field of medicine has been recorded in several early civilizations. An ancient Egyptian physician, Peseshet (c. 2600–2500 BCE), described in an inscription as "lady overseer of the female physicians", [ 8 ] [ 9 ] is the earliest known female physician named in the history of science. [ 10 ] Agamede was cited by Homer as a healer in ancient Greece before the Trojan War (c. 1194–1184 BCE). [ 11 ] [ 12 ] [ 13 ] According to one late antique legend, Agnodice was the first female physician to practice legally in fourth-century BCE Athens. [ 14 ]

The study of natural philosophy in ancient Greece was open to women. Recorded examples include Aglaonike, who predicted eclipses, and Theano, a mathematician and physician who was a pupil (possibly also the wife) of Pythagoras and a member of the school he founded in Crotone, which included many other women. [ 15 ] A passage in Pollux about the inventors of the process of coining money mentions Pheidon and Demodike of Cyme, wife of the Phrygian king Midas and daughter of King Agamemnon of Aeolian Cyme. [ 16 ] [ 17 ] This marriage link may have facilitated the Greeks "borrowing" their alphabet from the Phrygians, because the Phrygian letter shapes are closest to the inscriptions from Aeolis. [ 17 ]

During the period of the Babylonian civilization, around 1200 BCE, two perfume-makers named Tapputi-Belatekallim and -ninu (the first half of her name is unknown) were able to obtain essences from plants using extraction and distillation procedures. [ 18 ] In ancient Egypt, women were involved in applied chemistry, such as the making of beer and the preparation of medicinal compounds. [ 19 ] Women are also recorded as having made major contributions to alchemy. [ 19 ] Many of them lived in Alexandria around the 1st or 2nd centuries CE, where the gnostic tradition led to female contributions being valued. The most famous of the women alchemists, Mary the Jewess, is credited with inventing several chemical instruments, including the double boiler (bain-marie), and with improving or creating the distillation equipment of that time. [ 19 ] [ 20 ] Such distillation equipment included the kerotakis (a simple still) and the tribikos (a complex distillation device). [ 19 ]

Hypatia of Alexandria (c. 350–415 CE), daughter of Theon of Alexandria, was a philosopher, mathematician, and astronomer. [ 21 ] [ 22 ] She is the earliest female mathematician about whom detailed information has survived. [ 22 ] Hypatia is credited with writing several important commentaries on geometry, algebra and astronomy. [ 15 ] [ 23 ] She was the head of a philosophical school and taught many students.
[ 24 ] In 415 CE, she became entangled in a political dispute between Cyril, the bishop of Alexandria, and Orestes, the Roman governor, which resulted in a mob of Cyril's supporters stripping her, dismembering her, and burning the pieces of her body. [ 24 ]

The early part of the European Middle Ages, also known as the Dark Ages, was marked by the decline of the Roman Empire. The Latin West was left with great difficulties that dramatically affected the continent's intellectual production. Although nature was still seen as a system that could be comprehended in the light of reason, there was little innovative scientific inquiry. [ 25 ] The Arabic world deserves credit for preserving scientific advancements during this period: Arabic scholars produced original scholarly work and generated copies of manuscripts from Classical periods. [ 26 ] Christianity also underwent a period of resurgence, and Western civilization was bolstered as a result. This phenomenon was due, in part, to monasteries and nunneries that nurtured the skills of reading and writing, and to the monks and nuns who collected and copied important writings produced by scholars of the past. [ 26 ]

As mentioned above, convents were an important place of education for women during this period: monasteries and nunneries encouraged the skills of reading and writing, and some of these communities provided opportunities for women to contribute to scholarly research. [ 26 ] An example is the German abbess Hildegard of Bingen (1098–1179), a famous philosopher and botanist, whose prolific writings include treatments of various scientific subjects, including medicine, botany and natural history (c. 1151–58). [ 27 ] Another famous German abbess was Hroswitha of Gandersheim (935–1000), [ 26 ] who also encouraged women's intellectual pursuits. However, as nunneries grew in number and power, the all-male clerical hierarchy reacted with a backlash against women's advancement: many religious orders closed themselves to women and disbanded their nunneries, largely excluding women from the opportunity to learn to read and write. With that, the world of science became closed off to women, limiting their influence in science. [ 26 ]

In the 11th century, the first universities emerged. Women were, for the most part, excluded from university education. [ 1 ] However, there were some exceptions. The Italian University of Bologna allowed women to attend lectures from its inception in 1088. [ 28 ] The attitude toward educating women in medical fields appears to have been more liberal in Italy than in other places. The physician Trotula di Ruggiero is supposed to have held a chair at the Medical School of Salerno in the 11th century, where she taught many noble Italian women, a group sometimes referred to as the "ladies of Salerno". [ 20 ] Several influential texts on women's medicine, dealing with obstetrics and gynecology, among other topics, are also often attributed to Trotula. Dorotea Bucca was another distinguished Italian physician; she held a chair of philosophy and medicine at the University of Bologna for over forty years from 1390. [ 28 ] [ 29 ]
Other Italian women whose contributions to medicine have been recorded include Abella, Jacobina Félicie, Alessandra Giliani, Rebecca de Guarna, Margarita, Mercuriade (14th century), Constance Calenda, Clarice di Durisio (15th century), Constanza, Maria Incarnata and Thomasia de Mattio. [29] [32] Despite the success of some women, cultural biases affecting their education and participation in science were prominent in the Middle Ages. For example, Saint Thomas Aquinas, a Christian scholar, wrote of women: "She is mentally incapable of holding a position of authority." [1]

Margaret Cavendish, a seventeenth-century aristocrat, took part in some of the most important scientific debates of her time. She was, however, not inducted into the English Royal Society, although she was once allowed to attend a meeting. She wrote a number of works on scientific matters, including Observations upon Experimental Philosophy (1666) and Grounds of Natural Philosophy. In these works she was especially critical of the growing belief that humans, through science, were the masters of nature. The 1666 work attempted to heighten female interest in science; it also offered a critique of the experimental science of Bacon and criticized microscopes as imperfect machines. [33]

Isabella Cortese, an Italian alchemist, is best known for her book I secreti della signora Isabella Cortese (The Secrets of Isabella Cortese). Cortese manipulated nature to create several medicinal, alchemical and cosmetic "secrets", or experiments. [34] Her book belongs to a larger genre of "books of secrets" that became extremely popular among the elite during the 16th century. Despite the low percentage of literate women in Cortese's era, the majority of the alchemical and cosmetic "secrets" in such books were geared towards women, covering topics including pregnancy, fertility, and childbirth. [34]

Sophia Brahe, sister of Tycho Brahe, was a Danish horticulturist. Brahe was trained by her older brother in chemistry and horticulture but taught herself astronomy by studying books in German. She visited her brother at Uranienborg on numerous occasions and assisted with the work behind his De nova stella; her observations contributed to the study of the supernova SN 1572, which helped refute the geocentric model of the universe. [35] Tycho wrote the poem Urania Titani about his sister Sophia and her husband Erik: Urania represented Sophia and Titan represented Erik. Tycho used this poem to show his appreciation for his sister and all of her work.

In Germany, the tradition of female participation in craft production enabled some women to become involved in observational science, especially astronomy. Between 1650 and 1710, women made up 14% of German astronomers. [36] The most famous female astronomer in Germany was Maria Winkelmann. She was educated by her father and uncle and received training in astronomy from a nearby self-taught astronomer. Her chance to be a practising astronomer came when she married Gottfried Kirch, Prussia's foremost astronomer, and became his assistant at the astronomical observatory operated in Berlin by the Academy of Science. She made original contributions, including the discovery of a comet. When her husband died, Winkelmann applied for a position as assistant astronomer at the Berlin Academy, a post for which she had ample experience. As a woman with no university degree, she was denied it.
Members of the Berlin Academy feared that they would establish a bad example by hiring a woman. "Mouths would gape", they said. [37] Winkelmann's problems with the Berlin Academy reflect the obstacles women faced in being accepted in scientific work, which was considered to be chiefly for men. No woman was invited to either the Royal Society of London or the French Academy of Sciences until the twentieth century. Most people in the seventeenth century viewed a life devoted to any kind of scholarship as being at odds with the domestic duties women were expected to perform.

A founder of modern botany and zoology, the German Maria Sibylla Merian (1647–1717) spent her life investigating nature. When she was thirteen, she began raising caterpillars and studying their metamorphosis into butterflies. She kept a "Study Book" recording her investigations into natural philosophy, and in her first publication, The New Book of Flowers, she used imagery to catalog the lives of plants and insects. After her husband died, and after a brief period living at Wieuwerd, she and her daughter journeyed to Paramaribo for two years to observe insects, birds, reptiles, and amphibians. [38] She returned to Amsterdam and published The Metamorphosis of the Insects of Suriname, which "revealed to Europeans for the first time the astonishing diversity of the rain forest." [39] [40] A botanist and entomologist, Merian was known for her artistic illustrations of plants and insects; her travels to South America and Suriname, where she was assisted by her daughters, were uncommon for that era. [41]

Overall, the Scientific Revolution did little to change people's ideas about the nature of women, and more specifically about their capacity to contribute to science just as men do. According to Jackson Spielvogel, 'Male scientists used the new science to spread the view that women were by nature inferior and subordinate to men and suited to play a domestic role as nurturing mothers. The widespread distribution of books ensured the continuation of these ideas'. [42]

Although women excelled in many scientific areas during the eighteenth century, they were discouraged from learning about plant reproduction. Carl Linnaeus' system of plant classification based on sexual characteristics drew attention to botanical licentiousness, and people feared that women would learn immoral lessons from nature's example. Women were often depicted as either innately emotional and incapable of objective reasoning, or as natural mothers reproducing a natural, moral society. [43]

The eighteenth century was characterized by three divergent views of women: that women were mentally and socially inferior to men, that they were equal but different, and that women were potentially equal in both mental ability and contribution to society. [44] While individuals such as Jean-Jacques Rousseau believed women's roles were confined to motherhood and service to their male partners, the Enlightenment was a period in which women experienced expanded roles in the sciences. [45] The rise of salon culture in Europe brought philosophers and their conversation to an intimate setting where men and women met to discuss contemporary political, social, and scientific topics. [46] Although Rousseau attacked women-dominated salons as producing 'effeminate men' who stifled serious discourse, salons were characterized in this era by the mixing of the sexes. [47]
Lady Mary Wortley Montagu defied convention by introducing smallpox inoculation through variolation to Western medicine after witnessing it during her travels in the Ottoman Empire. [48] [49] In 1718 Wortley Montagu had her son inoculated, [49] and when a smallpox epidemic struck England in 1721, she had her daughter inoculated. [50] This was the first such operation done in Britain. [49] She persuaded Caroline of Ansbach to test the treatment on prisoners, [50] and Princess Caroline subsequently inoculated her two daughters in 1722. [49] Under a pseudonym, Wortley Montagu published an article describing and advocating inoculation in September 1722. [51]

After publicly defending forty-nine theses [52] in the Palazzo Pubblico, Laura Bassi was awarded a doctorate of philosophy in 1732 at the University of Bologna, [53] becoming the second woman in the world to earn a philosophy doctorate, after Elena Cornaro Piscopia in 1678, 54 years earlier. She subsequently defended twelve additional theses at the Archiginnasio, the main building of the University of Bologna, which allowed her to petition for a teaching position at the university. [53] In 1732 the university granted Bassi a professorship in philosophy, making her a member of the Academy of the Sciences and the first woman to earn a professorship in physics at a European university. [53] But the university maintained that women should lead a private life, and from 1746 to 1777 she gave only one formal dissertation per year, on topics ranging from the problem of gravity to electricity. [52] Because she could not lecture publicly at the university on a regular basis, she began conducting private lessons and experiments from home in 1749. [52] Owing to her growing responsibilities and public appearances on behalf of the university, Bassi was able to petition for regular pay increases, which she used to pay for advanced equipment; she eventually earned the highest salary paid by the University of Bologna, 1,200 lire. [54] In 1776, at the age of 65, she was appointed to the chair in experimental physics by the Bologna Institute of Sciences, with her husband as her teaching assistant. [52]

According to Britannica, Maria Gaetana Agnesi is "considered to be the first woman in the Western world to have achieved a reputation in mathematics." [55] She is credited as the first woman to write a mathematics handbook, the Instituzioni analitiche ad uso della gioventù italiana (Analytical Institutions for the Use of Italian Youth). Published in 1748, it "was regarded as the best introduction extant to the works of Euler." [56] [57] The goal of the work was, according to Agnesi herself, to give a systematic illustration of the different results and theorems of infinitesimal calculus. [58] In 1750 she became the second woman to be granted a professorship at a European university, also at the University of Bologna, though she never taught there. [56] [59]

The German Dorothea Erxleben was instructed in medicine by her father from an early age, [60] and Bassi's university professorship inspired Erxleben to fight for her right to practise medicine. In 1742 she published a tract arguing that women should be allowed to attend university. [61] After being admitted to study by a dispensation of Frederick the Great, [60] Erxleben received her M.D. from the University of Halle in 1754. [61]
She went on to analyse the obstacles preventing women from studying, among them housekeeping and children, [60] and became the first female medical doctor in Germany. [62]

In 1741–42 Charlotta Frölich became the first woman to be published by the Royal Swedish Academy of Sciences, with three books on agricultural science. In 1748 Eva Ekeblad became the first woman inducted into that academy. [63] In 1746 Ekeblad had written to the academy about her discoveries of how to make flour and alcohol out of potatoes. [64] [65] Potatoes had been introduced into Sweden in 1658 but had been cultivated only in the greenhouses of the aristocracy. Ekeblad's work turned potatoes into a staple food in Sweden and increased the supply of wheat, rye and barley available for making bread, since potatoes could be used instead to make alcohol. This greatly improved the country's eating habits and reduced the frequency of famines. [65] Ekeblad also discovered a method of bleaching cotton textiles and yarn with soap in 1751, [64] and of replacing the dangerous ingredients in the cosmetics of the time with potato flour in 1752. [65]

Émilie du Châtelet, a close friend of Voltaire, was the first scientist to appreciate the significance of kinetic energy, as opposed to momentum. She repeated and described the importance of an experiment originally devised by Willem 's Gravesande showing that the impact of falling objects is proportional not to their velocity but to the square of their velocity. This understanding is considered to have made a profound contribution to Newtonian mechanics. [66] In 1749 she completed the French translation of Newton's Philosophiae Naturalis Principia Mathematica (the Principia), including her derivation of the notion of conservation of energy from its principles of mechanics. Published ten years after her death, her translation and commentary of the Principia contributed to the completion of the scientific revolution in France and to its acceptance in Europe. [67]

Marie-Anne Pierrette Paulze and her husband Antoine Lavoisier rebuilt the field of chemistry, which had its roots in alchemy and at the time was a convoluted science dominated by Georg Stahl's theory of phlogiston. Paulze accompanied Lavoisier in his lab, making entries into lab notebooks and sketching diagrams of his experimental designs. The training she had received allowed her to draw experimental apparatus accurately and precisely, which ultimately helped many of Lavoisier's contemporaries to understand his methods and results. Paulze translated various works about phlogiston into French. One of her most important translations was of Richard Kirwan's Essay on Phlogiston and the Constitution of Acids, which she both translated and critiqued, adding footnotes and pointing out errors in the chemistry throughout the paper. [68] Paulze was instrumental in the 1789 publication of Lavoisier's Elementary Treatise on Chemistry, which presented a unified view of chemistry as a field. This work proved pivotal in the progression of chemistry, as it presented the idea of conservation of mass as well as a list of elements and a new system for chemical nomenclature. She also kept strict records of the procedures followed, lending validity to the findings Lavoisier published.

The astronomer Caroline Herschel was born in Hanover but moved to England, where she acted as an assistant to her brother, William Herschel.
Throughout her writings, she repeatedly made it clear that she desired to earn an independent wage and be able to support herself. When the crown began paying her for her assistance to her brother in 1787, she became the first woman to receive a salary for services to science, at a time when even men rarely received wages for scientific enterprises. [69] During 1786–97 she discovered eight comets, the first on 1 August 1786. She had unquestioned priority as discoverer of five of the comets [69] [70] and rediscovered Comet Encke in 1795. [71] Five of her comets were published in Philosophical Transactions, and a packet of paper bearing the superscription "This is what I call the Bills and Receipts of my Comets" contains some data connected with the discovery of each of these objects. William was summoned to Windsor Castle to demonstrate Caroline's comet to the royal family. [72] Caroline Herschel is often credited as the first woman to discover a comet; however, Maria Kirch (the Maria Winkelmann mentioned above) discovered a comet in the early 1700s but is often overlooked because, at the time, the discovery was attributed to her husband, Gottfried Kirch. [73]

Science remained largely an amateur pursuit during the early part of the nineteenth century. Botany was considered a popular and fashionable activity, and one particularly suitable to women; in the later eighteenth and early nineteenth centuries it was one of the most accessible areas of science for women in both England and North America. [74] [75] [76] However, as the nineteenth century progressed, botany and other sciences became increasingly professionalized, and women were increasingly excluded. Women's contributions were limited by their exclusion from most formal scientific education, but began to be recognized through their occasional admittance into learned societies during this period. [76] [74]

Scottish scientist Mary Fairfax Somerville carried out experiments in magnetism, presenting a paper entitled 'The Magnetic Properties of the Violet Rays of the Solar Spectrum' to the Royal Society in 1826, only the second woman to do so. She also wrote several mathematical, astronomical, physical and geographical texts, and was a strong advocate for women's education. In 1835, she and Caroline Herschel became the first two women elected as Honorary Members of the Royal Astronomical Society. [77]

English mathematician Ada, Lady Lovelace, a pupil of Somerville, corresponded with Charles Babbage about applications for his analytical engine. In her notes (1842–43) appended to her translation of Luigi Menabrea's article on the engine, she foresaw wide applications for it as a general-purpose computer, including composing music. She has been credited as writing the first computer program, though this has been disputed. [78]

In Germany, institutes for "higher" education of women (Höhere Mädchenschule, in some regions called Lyzeum) were founded at the beginning of the century. [79] The Deaconess Institute at Kaiserswerth was established in 1836 to instruct women in nursing. Elizabeth Fry visited the institute in 1840 and was inspired to found the London Institute of Nursing, and Florence Nightingale studied there in 1851. [80]

In the US, Maria Mitchell made her name by discovering a comet in 1847, but also contributed calculations to the Nautical Almanac produced by the United States Naval Observatory.
She became the first woman member of the American Academy of Arts and Sciences in 1848 and of the American Association for the Advancement of Science in 1850. Other notable female scientists were also active during this period. [15]

The latter part of the 19th century saw a rise in educational opportunities for women. Schools aiming to provide education for girls similar to that afforded to boys were founded in the UK, including the North London Collegiate School (1850), Cheltenham Ladies' College (1853) and the Girls' Public Day School Trust schools (from 1872). The first UK women's university college, Girton, was founded in 1869, and others soon followed: Newnham (1871) and Somerville (1879).

The Crimean War (1854–1856) contributed to establishing nursing as a profession, making Florence Nightingale a household name. A public subscription allowed Nightingale to establish a school of nursing in London in 1860, and schools following her principles were established throughout the UK. [80] Nightingale was also a pioneer in public health, as well as a statistician.

James Barry became the first British woman to gain a medical qualification in 1812, passing as a man. Elizabeth Garrett Anderson was the first openly female Briton to qualify medically, in 1865. With Sophia Jex-Blake, the American Elizabeth Blackwell and others, Garrett Anderson founded the first UK medical school to train women, the London School of Medicine for Women, in 1874.

Annie Scott Dill Maunder was a pioneer in astronomical photography, especially of sunspots. A mathematics graduate of Girton College, Cambridge, she was first hired in 1890 as an assistant to Edward Walter Maunder, discoverer of the Maunder Minimum and head of the solar department at Greenwich Observatory. They worked together to observe sunspots and to refine the techniques of solar photography, and married in 1895. Annie's mathematical skills made it possible to analyse the years of sunspot data that Maunder had been collecting at Greenwich. She also designed a small, portable wide-angle camera with a 1.5-inch-diameter (38 mm) lens. In 1898, the Maunders traveled to India, where Annie took the first photographs of the Sun's corona during a solar eclipse. By analysing the Cambridge records for both sunspots and geomagnetic storms, they were able to show that specific regions of the Sun's surface were the source of geomagnetic storms, and that the Sun did not radiate its energy uniformly into space, contrary to what William Thomson, 1st Baron Kelvin, had declared. [81]

In Prussia, women could go to university from 1894 and were allowed to receive a PhD; in 1908 all remaining restrictions on women were lifted. Alphonse Rebière published a book in France in 1897, entitled Les Femmes dans la science (Women in Science), which listed the contributions and publications of women in science. [82] Other notable female scientists were also active during this period. [15] [83]

In the second half of the 19th century, a large proportion of the most successful women in the STEM fields were Russians. Although many women received advanced training in medicine in the 1870s, [84] in other fields women were barred and had to go to western Europe, mainly Switzerland, in order to pursue scientific studies. In her book about these "women of the [eighteen] sixties" (шестидесятницы), as they were called, Ann Hibner Koblitz writes: [85]: 11 To a large extent, women's higher education in continental Europe was pioneered by this first generation of Russian women.
They were the first students in Zürich, Heidelberg, Leipzig, and elsewhere. Theirs were the first doctorates in medicine, chemistry, mathematics, and biology.

Among the successful scientists were Nadezhda Suslova (1843–1918), the first woman in the world to obtain a medical doctorate fully equivalent to men's degrees; Maria Bokova-Sechenova (1839–1929), a pioneer of women's medical education who received two doctoral degrees, one in medicine in Zürich and one in physiology in Vienna; Iulia Lermontova (1846–1919), the first woman in the world to receive a doctoral degree in chemistry; the marine biologist Sofia Pereiaslavtseva (1849–1903), director of the Sevastopol Biological Station and winner of the Kessler Prize of the Russian Society of Natural Scientists; and the mathematician Sofia Kovalevskaia (1850–1891), the first woman in 19th-century Europe to receive a doctorate in mathematics and the first to become a university professor in any field. [85]

In the later nineteenth century the rise of the women's college provided jobs for women scientists, and opportunities for education. Women's colleges produced a disproportionate number of women who went on to PhDs in science. Many coeducational colleges and universities also opened or started to admit women during this period; such institutions enrolled just over 3,000 women in 1875 and almost 20,000 by 1900. [83]

An example is Elizabeth Blackwell, who became the first certified female doctor in the US when she graduated from Geneva Medical College in 1849. [86] With her sister, Emily Blackwell, and Marie Zakrzewska, Blackwell founded the New York Infirmary for Women and Children in 1857 and the first women's medical college in 1868, providing both training and clinical experience for women doctors. She also published several books on medical education for women. In 1876, Elizabeth Bragg became the first woman to graduate with a civil engineering degree in the United States, from the University of California, Berkeley. [87]

Marie Skłodowska-Curie, the first woman to win a Nobel prize, in 1903 (physics), went on to become a double Nobel prize winner in 1911 (chemistry), both for her work on radiation. She was the first person to win two Nobel prizes, a feat accomplished by only three others since then. She was also the first woman to teach at the Sorbonne in Paris. [88] Alice Perry is understood to be the first woman to graduate with a degree in civil engineering in the then United Kingdom of Great Britain and Ireland, in 1906 at Queen's College, Galway, Ireland. [89]

Lise Meitner played a major role in the discovery of nuclear fission. As head of the physics section at the Kaiser Wilhelm Institute in Berlin, she collaborated closely with the head of chemistry, Otto Hahn, on atomic physics until she was forced to flee Berlin in 1938. In 1939, in collaboration with her nephew Otto Frisch, Meitner derived the theoretical explanation for an experiment performed by Hahn and Fritz Strassmann in Berlin, thereby demonstrating the occurrence of nuclear fission. The possibility that Fermi's bombardment of uranium with neutrons in 1934 had instead produced fission by breaking the nucleus into lighter elements had actually first been raised in print in 1934 by the chemist Ida Noddack (co-discoverer of the element rhenium), but the suggestion had been ignored at the time, as no group made a concerted effort to find any of these light radioactive fission products.
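Meitner and Frisch's explanation rested on a strikingly simple energy estimate. As a rough illustration (a textbook-style back-of-the-envelope calculation, not a reproduction of their 1939 paper), the fission fragments of a uranium nucleus are lighter than the original nucleus by roughly one-fifth of a proton mass (about 0.2 u), and the missing mass appears as kinetic energy via Einstein's mass–energy relation:

\[ E = \Delta m \, c^{2} \approx (0.2\,\mathrm{u}) \times (931.5\,\mathrm{MeV/u}) \approx 190\,\mathrm{MeV} \]

This is the order-of-magnitude figure of roughly 200 MeV per nucleus that Meitner and Frisch obtained, millions of times the energy released per atom in a typical chemical reaction.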
Maria Montessori was the first woman in Southern Europe to qualify as a physician. [90] She developed an interest in the diseases of children and believed in the necessity of educating those deemed ineducable. For the latter she argued for the development of teacher training along Froebelian lines, and she developed the principle that would also inform her general educational program: first the education of the senses, then the education of the intellect. Montessori introduced a teaching program that allowed children with intellectual disabilities to read and write. She sought to teach skills not by having children repeatedly attempt tasks, but by developing exercises that prepared them for those tasks. [91]

Emmy Noether revolutionized abstract algebra, filled in gaps in relativity, and was responsible for a critical theorem about conserved quantities in physics. The Erlangen program had attempted to identify invariants under a group of transformations. On 16 July 1918, before a scientific organization in Göttingen, Felix Klein read a paper written by Emmy Noether, because she was not allowed to present it herself. In particular, in what is referred to in physics as Noether's theorem, this paper identified the conditions under which the Poincaré group of transformations (now called a gauge group) for general relativity defines conservation laws. [92] Noether's papers made the requirements for the conservation laws precise. Among mathematicians, Noether is best known for her fundamental contributions to abstract algebra, where the adjective noetherian is nowadays commonly applied to many sorts of objects.

Mary Cartwright was a British mathematician who was the first to analyze a dynamical system with chaos. [93] Inge Lehmann, a Danish seismologist, first suggested in 1936 that inside the Earth's molten core there may be a solid inner core. [94] Women such as Margaret Fountaine continued to contribute detailed observations and illustrations in botany, entomology, and related observational fields. Joan Beauchamp Procter, an outstanding herpetologist, was the first woman Curator of Reptiles for the Zoological Society of London at London Zoo. Florence Sabin was an American medical scientist. Sabin was the first woman faculty member at Johns Hopkins in 1902 and the first woman full-time professor there in 1917. [95] Her scientific and research record is notable: Sabin published over 100 scientific papers and multiple books. [95]

Women moved into science in significant numbers by 1900, helped by the women's colleges and by opportunities at some of the new universities. Margaret Rossiter's books Women Scientists in America: Struggles and Strategies to 1940 and Women Scientists in America: Before Affirmative Action 1940–1972 provide an overview of this period, stressing the opportunities women found in separate women's work in science. [96] [97]

In 1892, in a Boston lecture, Ellen Swallow Richards called for the "christening of a new science", "oekology" (ecology). This new science included the study of "consumer nutrition" and environmental education. This interdisciplinary branch of science was later specialized into what is currently known as ecology, while the consumer nutrition focus split off and was eventually relabeled as home economics, [98] [99] which provided another avenue for women to study science.
Richards helped to form the American Home Economics Association, which published a journal, the Journal of Home Economics, and hosted conferences. Home economics departments were formed at many colleges, especially at land grant institutions. In her work at MIT, Richards also introduced the first biology course in its history, as well as the focus area of sanitary engineering.

Women also found opportunities in botany and embryology. In psychology, women earned doctorates but were encouraged to specialize in educational and child psychology and to take jobs in clinical settings, such as hospitals and social welfare agencies.

In 1901, Annie Jump Cannon first noticed that it was a star's temperature that was the principal distinguishing feature among different spectra. [dubious – discuss] This led to the re-ordering of the ABC types by temperature instead of hydrogen absorption-line strength. Due to Cannon's work, most of the then-existing classes of stars were discarded as redundant, leaving astronomy with the seven primary classes recognized today, in order: O, B, A, F, G, K, M, [100] a scheme that has since been extended.

Henrietta Swan Leavitt first published her study of variable stars in 1908. Her discovery became known as the "period-luminosity relationship" of Cepheid variables. [102] Our picture of the universe was changed forever, largely because of Leavitt's discovery: the accomplishments of Edwin Hubble, the renowned American astronomer, were made possible by Leavitt's groundbreaking research and Leavitt's Law. "If Henrietta Leavitt had provided the key to determine the size of the cosmos, then it was Edwin Powell Hubble who inserted it in the lock and provided the observations that allowed it to be turned", wrote David H. and Matthew D.H. Clark in their book Measuring the Cosmos. [103] Hubble often said that Leavitt deserved the Nobel Prize for her work. [104] Gösta Mittag-Leffler of the Swedish Academy of Sciences had begun paperwork on her nomination in 1924, only to learn that she had died of cancer three years earlier [105] (the Nobel Prize cannot be awarded posthumously).

In 1925, Harvard graduate student Cecilia Payne-Gaposchkin demonstrated for the first time, from existing evidence on the spectra of stars, that stars are made up almost exclusively of hydrogen and helium, one of the most fundamental theories in stellar astrophysics. [100] [102]

The Canadian-born Maud Menten worked in the US and Germany. Her most famous work, with Leonor Michaelis and based on earlier findings of Victor Henri, was on enzyme kinetics, resulting in the Michaelis–Menten equation. Menten also invented the azo-dye coupling reaction for alkaline phosphatase, which is still used in histochemistry. She characterised bacterial toxins from B. paratyphosus, Streptococcus scarlatina and Salmonella spp., and conducted the first electrophoretic separation of proteins in 1944. She worked on the properties of hemoglobin, regulation of blood sugar level, and kidney function.

World War II brought some new opportunities. The Office of Scientific Research and Development, under Vannevar Bush, began in 1941 to keep a registry of men and women trained in the sciences. Because there was a shortage of workers, some women were able to work in jobs they might not otherwise have accessed. Many women worked on the Manhattan Project or on scientific projects for the United States military services.
Women who worked on the Manhattan Project included Leona Woods Marshall, Katharine Way, and Chien-Shiung Wu. It was Wu who, drawing on her earlier research into fission products, confirmed Enrico Fermi's hypothesis that xenon-135 was impeding the operation of the B Reactor; the resulting adjustments quickly let the project resume its course. [106] [107] Wu would later provide the first experimental corroboration of the photon correlations underlying Albert Einstein's EPR paradox, and demonstrate the first violation of parity and charge-conjugation symmetry, thereby laying part of the conceptual basis for the future Standard Model of particle physics and the rapid development of the new field. [108]

Women in other disciplines looked for ways to apply their expertise to the war effort. Three nutritionists, Lydia J. Roberts, Hazel K. Stiebeling, and Helen S. Mitchell, developed the Recommended Dietary Allowances in 1941 to help military and civilian groups plan for group feeding situations. The RDAs proved especially necessary once foods began to be rationed. Rachel Carson worked for the United States Bureau of Fisheries, writing brochures to encourage Americans to consume a wider variety of fish and seafood. She also contributed to research to assist the Navy in developing techniques and equipment for submarine detection.

Women in psychology formed the National Council of Women Psychologists, which organized projects related to the war effort. The NCWP elected Florence Laura Goodenough president. In the social sciences, several women contributed to the Japanese Evacuation and Resettlement Study, based at the University of California. This study was led by the sociologist Dorothy Swaine Thomas, who directed the project and synthesized information from her informants, mostly graduate students in anthropology. These included Tamie Tsuchiyama, the only Japanese-American woman to contribute to the study, and Rosalie Hankey Wax.

In the United States Navy, female scientists conducted a wide range of research. Mary Sears, a planktonologist, researched military oceanographic techniques as head of the Hydrographic Office's Oceanographic Unit. Florence van Straten, a chemist, worked as an aerological engineer, studying the effects of weather on military combat. Grace Hopper, a mathematician, became one of the first computer programmers, working on the Mark I computer. Mina Spiegel Rees, also a mathematician, was the chief technical aide for the Applied Mathematics Panel of the National Defense Research Committee.

Gerty Cori was a biochemist who discovered the mechanism by which glycogen, a derivative of glucose, is broken down in muscle into lactic acid and later re-formed as a way to store energy. For this discovery she and her colleagues were awarded the Nobel Prize in 1947, making her the third woman, and the first American woman, to win a Nobel Prize in science, and the first woman ever to be awarded the Nobel Prize in Physiology or Medicine. Cori is among several scientists whose work is commemorated on a U.S. postage stamp. [109]

Nina Byers notes that before 1976, fundamental contributions of women to physics were rarely acknowledged; women worked unpaid or in positions lacking the status they deserved. That imbalance is gradually being redressed. [citation needed] In the early 1980s, Margaret Rossiter presented two concepts for understanding the statistics behind women in science, as well as the disadvantages women continued to suffer. She coined the terms "hierarchical segregation" and "territorial segregation."
The former describes the phenomenon whereby the further one goes up the chain of command in a field, the smaller the presence of women; the latter describes the phenomenon whereby women "cluster in scientific disciplines." [110]: 33–34 The book Athena Unbound provides a life-course analysis (based on interviews and surveys) of women in science, from early childhood interest through university, graduate school and the academic workplace. Its thesis is that "Women face a special series of gender related barriers to entry and success in scientific careers that persist, despite recent advances". [111]

The L'Oréal-UNESCO Awards for Women in Science were set up in 1998, with prizes alternating each year between the materials sciences and the life sciences. One award is given for each geographical region: Africa and the Middle East, Asia-Pacific, Europe, Latin America and the Caribbean, and North America. By 2017, these awards had recognised almost 100 laureates from 30 countries. Two of the laureates have gone on to win the Nobel Prize: Ada Yonath (2008) and Elizabeth Blackburn (2009). Fifteen promising young researchers also receive an International Rising Talent fellowship each year within this programme.

The South African-born physicist and radiobiologist Tikvah Alper (1909–95), working in the UK, developed many fundamental insights into biological mechanisms, including the (negative) discovery that the infective agent in scrapie could not be a virus. The French virologist Françoise Barré-Sinoussi performed some of the fundamental work in the identification of the human immunodeficiency virus (HIV) as the cause of AIDS, for which she shared the 2008 Nobel Prize in Physiology or Medicine. In July 1967, Jocelyn Bell Burnell discovered evidence for the first known radio pulsar, which resulted in the 1974 Nobel Prize in Physics for her supervisor. She was president of the Institute of Physics from October 2008 until October 2010.

The astrophysicist Margaret Burbidge was a member of the B²FH group responsible for originating the theory of stellar nucleosynthesis, which explains how elements are formed in stars. She held a number of prestigious posts, including the directorship of the Royal Greenwich Observatory. Mary Cartwright was a mathematician and student of G. H. Hardy whose work on nonlinear differential equations was influential in the field of dynamical systems.

Rosalind Franklin was a crystallographer whose work helped to elucidate the fine structures of coal, graphite, DNA and viruses. In 1953, the work she did on DNA allowed Watson and Crick to conceive their model of its structure. Her photograph of DNA gave Watson and Crick a basis for their DNA research, and they were awarded the Nobel Prize without giving due credit to Franklin, who had died of cancer in 1958.

Jane Goodall is a British primatologist considered to be the world's foremost expert on chimpanzees, best known for her over 55-year study of the social and family interactions of wild chimpanzees. She is the founder of the Jane Goodall Institute and the Roots & Shoots programme. Dorothy Hodgkin analyzed the molecular structure of complex chemicals by studying the diffraction patterns produced by passing X-rays through crystals. She won the 1964 Nobel Prize in Chemistry for determining the structure of vitamin B12, becoming the third woman to win the prize for chemistry. [112]
Irène Joliot-Curie, daughter of Marie Curie, won the 1935 Nobel Prize in Chemistry with her husband Frédéric Joliot for their work on radioactive isotopes, which led toward nuclear fission. This made the Curies the family with the most Nobel laureates to date. The palaeoanthropologist Mary Leakey discovered the first skull of a fossil ape on Rusinga Island, as well as a noted robust australopithecine.

The Italian neurologist Rita Levi-Montalcini received the 1986 Nobel Prize in Physiology or Medicine, alongside Stanley Cohen, for the discovery of nerve growth factor (NGF); her work opened the way to a better understanding of conditions such as tumors, delayed wound healing, and malformations. [113] While making advances in medicine and science, Levi-Montalcini was also politically active throughout her life. [114] She was appointed a Senator for Life in the Italian Senate in 2001 and is the longest-lived Nobel laureate to date.

The zoologist Anne McLaren conducted studies in genetics which led to advances in in vitro fertilization. She became the first female officer of the Royal Society in its 331-year history. Christiane Nüsslein-Volhard received the Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. She also started the Christiane Nüsslein-Volhard Foundation (Christiane Nüsslein-Volhard Stiftung) to aid promising young female German scientists with children. Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. She co-authored the well-known textbook Methods of Mathematical Physics with her husband, Sir Harold Jeffreys.

Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman were six of the original programmers of the ENIAC, the first general-purpose electronic computer. [115] Linda B. Buck is a neurobiologist who was awarded the 2004 Nobel Prize in Physiology or Medicine, along with Richard Axel, for their work on olfactory receptors.

Rachel Carson was a marine biologist from the United States who is credited with founding the modern environmental movement. [116] The biologist and activist published Silent Spring, a work on the dangers of pesticides, in 1962. Its publication led to questioning of the use of harmful pesticides and other chemicals in agricultural settings, [116] and provoked a campaign to discredit Carson. However, the federal government called for a review of DDT, which concluded with DDT being banned. [117] Carson died of cancer in 1964, at the age of 56. [117]

Eugenie Clark, popularly known as The Shark Lady, was an American ichthyologist known for her research on the poisonous fish of the tropical seas and on the behavior of sharks. [118] Ann Druyan is an American writer, lecturer and producer specializing in cosmology and popular science. Druyan has credited her knowledge of science to the 20 years she spent studying with her late husband, Carl Sagan, rather than to formal academic training. [citation needed] She was responsible for the selection of music on the Voyager Golden Record for the Voyager 1 and Voyager 2 exploratory missions, and also sponsored the Cosmos 1 spacecraft.
Gertrude B. Elion was an American biochemist and pharmacologist, awarded the Nobel Prize in Physiology or Medicine in 1988 for her work on the differences in biochemistry between normal human cells and pathogens. Sandra Moore Faber, with Robert Jackson, discovered the Faber–Jackson relation between luminosity and stellar velocity dispersion in elliptical galaxies. She also headed the team which discovered the Great Attractor, a large concentration of mass that is pulling a number of nearby galaxies in its direction.

The zoologist Dian Fossey worked with gorillas in Africa from 1967 until her murder in 1985. The astronomer Andrea Ghez received a MacArthur "genius grant" in 2008 for her work in surmounting the limitations of earthbound telescopes. [119] Maria Goeppert Mayer was the second female Nobel Prize winner in physics, for proposing the nuclear shell model of the atomic nucleus. Earlier in her career, she had worked in unofficial or volunteer positions at the university where her husband was a professor. Goeppert Mayer is one of several scientists whose work is commemorated on a U.S. postage stamp. [120] Sulamith Low Goldhaber and her husband Gerson Goldhaber formed a research team studying the K meson and other high-energy particles in the 1950s.

Carol Greider and the Australian-born Elizabeth Blackburn, along with Jack W. Szostak, received the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase. Rear Admiral Grace Murray Hopper developed the first computer compiler, released in 1952, while working for the Eckert–Mauchly Computer Corporation. Deborah S. Jin's team at JILA, in Boulder, Colorado, produced the first fermionic condensate, a new state of matter, in 2003. Stephanie Kwolek, a researcher at DuPont, invented poly-paraphenylene terephthalamide, better known as Kevlar. Lynn Margulis is a biologist best known for her work on endosymbiotic theory, which is now generally accepted as the explanation for how certain organelles were formed.

Barbara McClintock's studies of maize genetics demonstrated genetic transposition in the 1940s and 1950s; she had earned her PhD from Cornell University in 1927. Her discovery of transposition provided a greater understanding of mobile loci within chromosomes and of the fluidity of the genome. [121] She dedicated her life to her research, and she was awarded the Nobel Prize in Physiology or Medicine in 1983, becoming the first American woman to receive a Nobel Prize that was not shared with anyone else. [121] McClintock is one of several scientists whose work is commemorated on a U.S. postage stamp. [122]

Nita Ahuja is a surgeon-scientist known for her work on CIMP in cancer; she is currently the chief of surgical oncology at Johns Hopkins Hospital, the first woman ever to head that department. Carolyn Porco is a planetary scientist best known for her work on the Voyager program and the Cassini–Huygens mission to Saturn; she is also known for her popularization of science, in particular space exploration. The physicist Helen Quinn, with Roberto Peccei, postulated Peccei–Quinn symmetry. One consequence is a particle known as the axion, a candidate for the dark matter that pervades the universe. Quinn was the first woman to receive the Dirac Medal of the International Centre for Theoretical Physics (ICTP) and the first to receive the Oskar Klein Medal.
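To give a sense of the physics behind Quinn's best-known result (a standard schematic presentation, not drawn from the sources cited in this article), quantum chromodynamics allows a CP-violating term in its Lagrangian whose coefficient θ is measured to be vanishingly small; Peccei–Quinn symmetry explains this by promoting θ to a dynamical field whose quantum is the axion:

\[ \mathcal{L}_{\theta} \;=\; \theta \, \frac{g_s^{2}}{32\pi^{2}} \, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu}, \qquad \theta \;\longrightarrow\; \frac{a(x)}{f_a} \]

Here G is the gluon field strength, a(x) is the axion field and f_a is its decay constant; when a(x) relaxes to the minimum of its potential, the effective θ is driven toward zero, leaving behind the light axion particle mentioned above.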
Lisa Randall is a theoretical physicist and cosmologist, best known for her work on the Randall–Sundrum model. She was the first tenured female physics professor at Princeton University. Sally Ride was an astrophysicist and the first American woman, and then-youngest American, to travel to outer space. Ride wrote or co-wrote several books on space aimed at children, with the goal of encouraging them to study science. [123] [124] Ride participated in the Gravity Probe B (GP-B) project, which provided more evidence that the predictions of Albert Einstein's general theory of relativity are correct. [125]

Through her observations of galaxy rotation curves, the astronomer Vera Rubin uncovered the galaxy rotation problem, now taken to be one of the key pieces of evidence for the existence of dark matter. She was the first woman allowed to observe at the Palomar Observatory. Sara Seager is a Canadian-American astronomer, currently a professor at the Massachusetts Institute of Technology, known for her work on extrasolar planets. The astronomer Jill Tarter is best known for her work on the search for extraterrestrial intelligence; she was named one of the 100 most influential people in the world by Time Magazine in 2004 [126] and is the former director of SETI. [127] Rosalyn Yalow was the co-winner of the 1977 Nobel Prize in Physiology or Medicine (together with Roger Guillemin and Andrew Schally) for the development of the radioimmunoassay (RIA) technique.

Latin America

Maria Nieves Garcia-Casal was the first woman scientist and nutritionist from Latin America to lead the Latin American Society of Nutrition. Angela Restrepo Moreno is a microbiologist from Colombia who first became interested in microorganisms when she had the opportunity to view them through a microscope that belonged to her grandfather. [128] While her research is wide-ranging, her main area is fungi and the diseases they cause. [128] This work led her to research paracoccidioidomycosis, a fungal disease that has been diagnosed only in Latin America and was originally identified in Brazil. [128] Research groups developed by Restrepo have pursued two further routes: the relationship between humans, fungi, and the environment, and how the cells within the fungi work. [128] Along with her research, Restrepo co-founded the Corporation for Biological Research (CIB), a non-profit devoted to scientific research. [128] She was awarded the SCOPUS Prize in 2007 for her numerous publications, [128] and she currently resides in Colombia and continues her research.

Susana López Charretón was born in Mexico City, Mexico, in 1957. She is a virologist whose area of study is the rotavirus, [129] which had been discovered only four years before she began studying it. [129] Her main work was to study how the virus enters cells and how it multiplies. [129] Because of her work, and that of several others, other scientists were able to learn more details of the virus. [129] Her research now focuses on the virus's ability to recognize the cells it infects. [129] Along with her husband, López Charretón was awarded the Carlos J. Finlay Prize for Microbiology in 2001, [129] and she received the L'Oréal-UNESCO For Women in Science Award in 2012, [129] among several other awards for her research. Liliana Quintanar Vera is a Mexican chemist.
A researcher at the Department of Chemistry of the Center of Investigation and Advanced Studies, Quintanar currently focuses on neurodegenerative diseases such as Parkinson's, Alzheimer's, and prion disease, as well as on degenerative diseases such as diabetes and cataracts. [130] In this research she has focused on how copper interacts with the proteins involved in these neurodegenerative diseases. [131] Her awards include the Mexican Academy of Sciences Research Prize for Science in 2017, the Marcos Moshinsky Chair award in 2016, the Fulbright Scholarship in 2014, and the L'Oréal-UNESCO For Women in Science Award in 2007. [130]

The Nobel Prize and Prize in Economic Sciences were awarded to women 61 times between 1901 and 2022. One woman, Marie Skłodowska-Curie, has been honored twice, with the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry, meaning that 60 women in total were awarded the Nobel Prize between 1901 and 2022. Twenty-five women have been awarded the Nobel Prize in physics, chemistry, or physiology or medicine. [4]

Statistics are used to indicate the disadvantages faced by women in science, and also to track positive changes in employment opportunities and incomes for women in science. [110]: 33 Women appear to do less well than men (in terms of degree, rank, and salary) even in fields that have been traditionally dominated by women, such as nursing. In 1991 women earned 91% of the PhDs in nursing, and men held 4% of full professorships in nursing. [citation needed] In psychology, where women earn the majority of PhDs, women do not fill the majority of high-ranking positions. [133] [citation needed]

Women's lower salaries in the scientific community are also reflected in statistics. According to data from 1993, the median salaries of female scientists and engineers with doctoral degrees were 20% lower than those of men. [110]: 35 [needs update] This has been explained [who?] by the lower participation of women in high-ranking scientific fields and positions and by the female majority in low-paid fields and positions. However, even when men and women work in the same scientific field, women are typically paid 15–17% less than men. [citation needed] In addition to the gender gap, there are also salary differences across ethnicities: African-American women with more years of experience earn 3.4% less than European-American women with similar skills, while Asian-American women engineers out-earn both. [134] [needs update]

Women are also under-represented in the sciences compared to their numbers in the overall working population. Of the 11% of the workforce who are African-American women, 3% are employed as scientists and engineers. [clarification needed] Hispanics made up 8% of the total workforce in the US, and 3% of that number are scientists and engineers. Native American participation cannot be statistically measured. [citation needed]

Women tend to earn less than men in almost all industries, including government and academia. [citation needed] Women are less likely to be hired into the highest-paid positions. [citation needed] The data showing the differences in salaries, ranks, and overall success between the genders are often claimed [who?] to be a result of women's lack of professional experience. The rate of women's professional achievement is, however, increasing.
In 1996, salaries for women in professional fields increased from 85% to 95% of those of men with similar skills and jobs, and young women between the ages of 27 and 33 earned 98% as much as their male peers. [needs update] In the total workforce of the United States, women earn 74% as much as their male counterparts (in the 1970s they made 59% as much). [110]: 33–37 [needs update] Claudia Goldin of Harvard concludes in A Grand Gender Convergence: Its Last Chapter that "The gender gap in pay would be considerably reduced and might vanish altogether if firms did not have an incentive to disproportionately reward individuals who labored long hours and worked particular hours." [135]

Research on women's participation in the "hard" sciences, such as physics and computer science, speaks of the "leaky pipeline" model, in which the proportion of women "on track" to potentially becoming top scientists falls off at every step of the way, from getting interested in science and maths in elementary school, through the doctorate, postdoctoral work, and career steps. The leaky pipeline also applies in other fields. In biology, for instance, women in the United States have been earning master's degrees in the same numbers as men for two decades, yet fewer women obtain PhDs, and the number of women principal investigators has not risen. [136]

One explanation proposed for this "leaky pipeline" lies in factors outside academia that coincide with the years in which women pursue continued education and search for a career. The most significant such factor is family formation: as women continue their academic careers, many also step into new roles as wives and mothers, which traditionally require a large commitment of time and presence outside work. Such commitments sit poorly with the demands of attaining tenure, which is why women entering the family-formation period of their lives are 35% less likely than their male counterparts to pursue tenure positions after receiving their PhDs. [137]

In the UK, women occupied over half the places in science-related higher education courses (science, medicine, maths, computer science and engineering) in 2004–05. [138] However, gender differences varied from subject to subject: women substantially outnumbered men in biology and medicine, especially nursing, while men predominated in maths, physical sciences, computer science and engineering. In the US, women with science or engineering doctoral degrees were predominantly employed in the education sector in 2001, with substantially fewer employed in business or industry than men. [139]

According to salary figures reported in 1991, women earned between 83.6 percent and 87.5 percent of a man's salary. [needs update] An even greater disparity is the ongoing trend that women scientists with more experience are not as well compensated as their male counterparts: the salary of a male engineer continues to grow as he gains experience, whereas the salary of a female engineer reaches a plateau. [140] In the United States and many European countries, women who succeed in science tend to be graduates of single-sex schools. [110]: Chapter 3 [needs update]

Women earn 54% of all bachelor's degrees in the United States, and 50% of those are in science. Nine percent of US physicists are women. [110]: Chapter 2 [needs update]
In 2013, women accounted for 53% of the world's graduates at the bachelor's and master's level and 43% of successful PhD candidates, but just 28% of researchers. Women graduates are consistently highly represented in the life sciences, often at over 50%. However, their representation in other fields is inconsistent. In North America and much of Europe, few women graduate in physics, mathematics and computer science, but in other regions the proportion of women may be close to parity in physics or mathematics. In engineering and computer sciences, women consistently trail men, a situation that is particularly acute in many high-income countries. [141]

As of 2015, each step up the ladder of the scientific research system saw a drop in female participation until, at the highest echelons of scientific research and decision-making, there were very few women left. In 2015, the EU Commissioner for Research, Science and Innovation, Carlos Moedas, called attention to this phenomenon, adding that the majority of entrepreneurs in science and engineering tended to be men. In 2013, the German government coalition agreement introduced a 30% quota for women on company boards of directors. [141]

Women made up 14% of university chancellors and vice-chancellors at Brazilian public universities in 2010, and 17% of those in South Africa in 2011. [142] [143] As of 2015, in Argentina, women made up 16% of directors and vice-directors of national research centres and, in Mexico, 10% of directors of scientific research institutes at the National Autonomous University of Mexico. [144] [145] In the US, the number is slightly higher, at 23%. In the EU, less than 16% of tertiary institutions were headed by a woman in 2010, and just 10% of universities. In 2011, at the main tertiary institution for the English-speaking Caribbean, the University of the West Indies, women represented 51% of lecturers but only 32% of senior lecturers and 26% of full professors. A 2018 review of the Royal Society of Britain by the historians Aileen Fyfe and Camilla Mørk Røstvik produced similarly low numbers; [146] women account for more than 25% of members in only a handful of countries, including Cuba, Panama and South Africa. As of 2015, the figure for Indonesia was 17%. [141] [147] [148]

In the life sciences, women researchers have achieved parity (45–55% of researchers) in many countries, and in some the balance now tips in their favour. Six out of ten researchers in both the medical and agricultural sciences are women in Belarus and New Zealand, for instance, and more than two-thirds of researchers in the medical sciences are women in El Salvador, Estonia, Kazakhstan, Latvia, the Philippines, Tajikistan, Ukraine and Venezuela. [141]

There has been a steady increase in female graduates in the agricultural sciences since the turn of the century. In sub-Saharan Africa, for instance, the number of female graduates in agricultural science has been increasing steadily, with eight countries reporting a share of women graduates of 40% or more: Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone, South Africa, Swaziland and Zimbabwe. The reasons for this surge are unclear, although one explanation may lie in the growing emphasis on national food security and the food industry. Another possible explanation is that women are highly represented in biotechnology.
For example, in South Africa, women were underrepresented in engineering (16%) in 2004 and in 'natural scientific professions' (16%) in 2006 but made up 52% of employees working in biotechnology-related companies. [ 141 ] Women play an increasing role in environmental sciences and conservation biology; indeed, women played a leading role in the development of these disciplines. Silent Spring by Rachel Carson proved an important impetus to the conservation movement and the later banning of chemical pesticides. Women have also played an important role in conservation biology, among them Dian Fossey, author of Gorillas in the Mist , and Jane Goodall, who studied primates in East Africa. Today women make up an increasing proportion of roles in the active conservation sector. A recent survey of those working in the Wildlife Trusts in the U.K., the leading conservation organisation in England, found that there are nearly as many women as men in practical conservation roles. [ 149 ] Women are consistently underrepresented in engineering and related fields. In Israel, for instance, where 28% of senior academic staff are women, there are proportionately far fewer in engineering (14%), physical sciences (11%), and mathematics and computer sciences (10%), although women dominate education (52%) and paramedical occupations (63%). In Japan and the Republic of Korea, women represent just 5% and 10% of engineers. [ 141 ] Women pursuing STEM careers often face gender disparities in the workplace, especially in science and engineering. Although it has become more common for women to earn undergraduate degrees in science, they continue to be shortchanged in salary and in access to higher-ranking positions; men are, for example, more likely than women to be selected for a given employment position. [ 150 ] In Europe and North America, the number of female graduates in engineering, physics, mathematics and computer science is generally low. Women make up just 19% of engineers in Canada, Germany and the US and 22% in Finland, for example. However, 50% of engineering graduates are women in Cyprus, 38% in Denmark and 36% in the Russian Federation, for instance. [ 141 ] In many cases, engineering has lost ground to other sciences, including agriculture. The case of New Zealand is fairly typical. Here, women jumped from representing 39% to 70% of agricultural graduates between 2000 and 2012, continued to dominate health (80–78%) but ceded ground in science (43–39%) and engineering (33–27%). [ 141 ] In a number of developing countries, there is a sizable proportion of women engineers. At least three out of ten engineers are women, for instance, in Costa Rica, Vietnam and the United Arab Emirates (31%), Algeria (32%), Mozambique (34%), Tunisia (41%) and Brunei Darussalam (42%). In Malaysia (50%) and Oman (53%), women are on a par with men. Of the 13 sub-Saharan countries reporting data, seven have observed substantial increases (more than 5%) in women engineers since 2000, namely: Benin, Burundi, Eritrea, Ethiopia, Madagascar, Mozambique and Namibia. [ 141 ] Of the seven Arab countries reporting data, four observe a steady percentage or an increase in female engineers (Morocco, Oman, Palestine and Saudi Arabia). In the United Arab Emirates, the government has made it a priority to develop a knowledge economy, having recognized the need for a strong human resource base in science, technology and engineering. 
With just 1% of the labour force being Emirati, it is also concerned about the low percentage of Emirati citizens employed in key industries. As a result, it has introduced policies promoting the training and employment of Emirati citizens, as well as a greater participation of Emirati women in the labour force. Emirati female engineering students have said that they are attracted to a career in engineering for reasons of financial independence, the high social status associated with this field, the opportunity to engage in creative and challenging projects and the wide range of career opportunities. [ 141 ] An analysis of computer science shows a steady decrease in female graduates since 2000 that is particularly marked in high-income countries. Between 2000 and 2012, the share of women graduates in computer science slipped in Australia, New Zealand, the Republic of Korea and the US. In Latin America and the Caribbean, the share of women graduates in computer science dropped by between 2 and 13 percentage points over this period for all countries reporting data. [ 141 ] There are exceptions. In Denmark, the proportion of female graduates in computer science increased from 15% to 24% between 2000 and 2012, and Germany saw an increase from 10% to 17%. These are still very low levels. Figures are higher in many emerging economies. In Turkey, for instance, the proportion of women graduating in computer science rose from a relatively high 29% to 33% between 2000 and 2012. [ 141 ] The Malaysian information technology (IT) sector is made up equally of women and men, with large numbers of women employed as university professors and in the private sector. This is a product of two historical trends: the predominance of women in the Malay electronics industry, the precursor to the IT industry, and the national push to achieve a 'pan-Malayan' culture beyond the three ethnic groups of Indian, Chinese and Malay. Government support for the education of all three groups is available on a quota basis and, since few Malay men are interested in IT, this leaves more room for women. Additionally, families tend to be supportive of their daughters' entry into this prestigious and highly remunerated industry, in the interests of upward social mobility. Malaysia's push to develop an endogenous research culture should deepen this trend. [ 141 ] In India, the substantial increase in women undergraduates in engineering may be indicative of a change in the 'masculine' perception of engineering in the country. It is also a product of interest on the part of parents, since their daughters will be assured of employment as the field expands, as well as an advantageous marriage. Other factors include the 'friendly' image of engineering in India and the easy access to engineering education resulting from the increase in the number of women's engineering colleges over the last two decades. [ 141 ] While women have made huge strides in the STEM fields, they remain underrepresented. One of the areas where women are most underrepresented in science is space flight: of the 556 people who have traveled to space, only 65 – about 11% – have been women. [ 151 ] In the 1960s, the American space program was taking off, but women could not be considered for it because astronauts were then required to be military pilots, a profession closed to women. There were other "practical" reasons as well. 
According to General Don Flickinger of the United States Air Force, there was difficulty "designing and fitting a space suit to accommodate their particular biological needs and functions." [ 152 ] During the early 1960s, the first American astronauts, nicknamed the Mercury Seven , were training. At the same time, William Randolph Lovelace II was interested in seeing whether women could pass the same training that the Mercury Seven were undergoing. Lovelace recruited thirteen female pilots, called the " Mercury 13 ", and put them through the same tests that the male astronauts took. The women performed better on these tests than the men of the Mercury Seven did, but this did not convince NASA officials to allow women in space. [ 151 ] In response, congressional hearings were held to investigate discrimination against women in the program. One of the women who testified at the hearing was Jerrie Cobb , the first woman to pass Lovelace's tests. [ 153 ] During her testimony, Cobb said: [ 151 ] I find it a little ridiculous when I read in a newspaper that there is a place called Chimp College in New Mexico where they are training chimpanzees for space flight , one a female named Glenda. I think it would be at least as important to let the women undergo this training for space flight. NASA officials also had representatives present, notably astronauts John Glenn and Scott Carpenter , to testify that women were not suited for the space program. Ultimately, no action came from the hearings, and NASA did not put a woman in space until 1983. [ 153 ] Even though the United States did not allow women in space during the 1960s or 1970s, other countries did. Valentina Tereshkova , a cosmonaut from the Soviet Union, was the first woman to fly in space. Although she had no piloting experience, she flew on the Vostok 6 in 1963. Before going to space, Tereshkova was a textile worker. She successfully orbited the Earth 48 times, yet the next woman did not fly to space until almost twenty years later. [ 154 ] Sally Ride was the third woman to go to space and the first American woman in space. In 1978, Ride and five other women were accepted into the first class of astronauts that allowed women, and in 1983 she flew on the Challenger for the STS-7 mission. [ 154 ] NASA has been more inclusive in recent years. The number of women in NASA's astronaut classes has steadily risen since the first class that allowed women in 1978. The most recent class was 45% women, and the class before was 50%. In 2019, the first all-female spacewalk was completed at the International Space Station . [ 155 ] The global figures mask wide disparities from one region to another. In Southeast Europe, for instance, women researchers have attained parity and, at 44%, are on the verge of doing so in Central Asia and in Latin America and the Caribbean. In the European Union, on the other hand, just one in three (33%) researchers is a woman, compared to 37% in the Arab world. Women are also better represented in sub-Saharan Africa (30%) than in South Asia (17%). [ 141 ] There are also wide intraregional disparities. Women make up 52% of researchers in the Philippines and Thailand, for instance, and are close to parity in Malaysia and Vietnam, yet only one in three researchers is a woman in Indonesia and Singapore. 
In Japan and the Republic of Korea, two countries characterized by high researcher densities and technological sophistication, as few as 15% and 18% of researchers respectively are women. These are the lowest ratios among members of the Organisation for Economic Co-operation and Development . The Republic of Korea also has the widest gap among OECD members in remuneration between men and women researchers (39%). There is also a yawning gap in Japan (29%). [ 141 ] Latin America has some of the world's highest rates of women studying scientific fields; it also shares with the Caribbean one of the highest proportions of female researchers: 44%. Of the 12 countries reporting data for the years 2010–2013, in seven, women have achieved gender parity or even dominate research: Bolivia (63%), Venezuela (56%), Argentina (53%), Paraguay (52%), Uruguay (49%), Brazil (48%) and Guatemala (45%). Costa Rica is on the cusp (43%). Chile has the lowest score among countries for which there are recent data (31%). The Caribbean paints a similar picture, with Cuba having achieved gender parity (47%) and Trinidad and Tobago on 44%. Recent data on women's participation in industrial research are available for those countries with the most developed national innovation systems, with the exception of Brazil and Cuba: Uruguay (47%), Argentina (29%), Colombia and Chile (26%). [ 141 ] As in most other regions, the great majority of health graduates are women (60–85%). Women are also strongly represented in science. More than 40% of science graduates are women in each of Argentina, Colombia, Ecuador, El Salvador, Mexico, Panama and Uruguay. The Caribbean paints a similar picture, with women graduates in science being on a par with men or dominating this field in Barbados, Cuba, Dominican Republic and Trinidad and Tobago. [ 141 ] In engineering, women make up over 30% of the graduate population in seven Latin American countries (Argentina, Colombia, Costa Rica, Honduras, Panama and Uruguay) and one Caribbean country, the Dominican Republic. There has been a decrease in the number of women engineering graduates in Argentina, Chile and Honduras. [ 141 ] The participation of women in science has consistently dropped since the turn of the century. This trend has been observed in all sectors of the larger economies: Argentina, Brazil, Chile and Colombia. Mexico is a notable exception, having recorded a slight increase. Some of the decrease may be attributed to women transferring to agricultural sciences in these countries. Another negative trend is the drop in female doctoral students and in the labour force. Of those countries reporting data, the majority signal a significant drop of 10–20 percentage points in the transition from master's to doctoral graduates. [ 141 ] A study at UNICAMP (2019–2023) reveals female underrepresentation in publications, particularly in STEM fields and in first/last authorship positions. [ 156 ] While UNICAMP's 42% female participation is comparable to USP's historical average (38.28%), [ 157 ] both fall below the Brazilian average (49%), [ 158 ] contrasting with the higher female representation in science in some Latin American countries (UNESCO). Despite gender equity policies, female participation at UNICAMP declined after 2021, potentially due to the pandemic, and the Field-Weighted Citation Impact (FWCI) was lower in areas like Social Sciences and Life Sciences, highlighting the need for stronger gender equality policies in science. 
Most countries in Eastern Europe, West and Central Asia have attained gender parity in research ( Armenia , Azerbaijan , Georgia , Kazakhstan , Mongolia and Ukraine ) or are on the brink of doing so ( Kyrgyzstan and Uzbekistan ). This trend is reflected in tertiary education, with some exceptions in engineering and computer science. Although Belarus and the Russian Federation have seen a drop over the past decade, women still represented 41% of researchers in 2013. In the former socialist states, women are also very present in the business enterprise sector: Bosnia and Herzegovina (59%), Azerbaijan (57%), Kazakhstan (50%), Mongolia (48%), Latvia (48%), Serbia (46%), Croatia and Bulgaria (43%), Ukraine and Uzbekistan (40%), Romania and Montenegro (38%), Belarus (37%), Russian Federation (37%). [ 141 ] One in three researchers is a woman in Turkey (36%) and Tajikistan (34%). Participation rates are lower in Iran (26%) and Israel (21%), although Israeli women represent 28% of senior academic staff. At university, Israeli women dominate medical sciences (63%) but only a minority study engineering (14%), physical sciences (11%), and mathematics and computer science (10%). There has been an interesting evolution in Iran. Whereas the share of female PhD graduates in health remained stable at 38–39% between 2007 and 2012, it rose in all three other broad fields. Most spectacular was the leap in female PhD graduates in agricultural sciences from 4% to 33%, but there was also a marked progression in science (from 28% to 39%) and engineering (from 8% to 16%). [ 141 ] With the exception of Greece, all the countries of Southeast Europe were once part of the Soviet bloc. Some 49% of researchers in these countries are women (compared to 37% in Greece in 2011). This high proportion is considered a legacy of the consistent investment in education by the socialist governments in place until the early 1990s, including that of the former Yugoslavia. Moreover, the participation of female researchers is holding steady or increasing in much of the region, with representation broadly even across the four sectors of government, business, higher education and non-profit. In most countries, women tend to be on a par with men among tertiary graduates in science. Between 70% and 85% of graduates are women in health, less than 40% in agriculture and between 20% and 30% in engineering. Albania has seen a considerable increase in the share of its women graduates in engineering and agriculture. [ 141 ] Women make up 33% of researchers overall in the European Union (EU), slightly more than their representation in science (32%). Women constitute 40% of researchers in higher education, 40% in government and 19% in the private sector, and the number of female researchers is increasing faster than that of male researchers. The proportion of female researchers has been increasing over the last decade at a faster rate than that of men (5.1% annually over 2002–2009 compared with 3.3% for men), which is also true of their participation among scientists and engineers (up 5.4% annually between 2002 and 2010, compared with 3.1% for men). [ 141 ] Despite these gains, women's academic careers in Europe remain characterized by strong vertical and horizontal segregation. In 2010, although female students (55%) and graduates (59%) outnumbered male students, men outnumbered women among PhD students and PhD graduates (albeit by a small margin). 
Further along in the research career, women represented 44% of grade C academic staff, 37% of grade B academic staff and 20% of grade A academic staff. These trends are intensified in science, where women make up 31% of the student population at the tertiary level, 38% of PhD students and 35% of PhD graduates. At the faculty level, they make up 32% of academic grade C personnel, 23% of grade B and 11% of grade A. The proportion of women among full professors is lowest in engineering and technology, at 7.9%. With respect to representation in science decision-making, in 2010 15.5% of higher education institutions were headed by women and 10% of universities had a female rector. [ 141 ] Membership on science boards remained predominantly male as well, with women making up 36% of board members. The EU has engaged in a major effort to integrate female researchers and gender research into its research and innovation strategy since the mid-2000s. Increases in women's representation across scientific fields indicate that this effort has met with some success; however, the continued lack of representation of women at the top level of faculties, management and science decision-making indicates that more work needs to be done. The EU is addressing this through a gender equality strategy and a crosscutting mandate in Horizon 2020 , its research and innovation funding programme for 2014–2020. [ 141 ] In 2013, women made up the majority of PhD graduates in fields related to health in Australia (63%), New Zealand (58%) and the United States of America (73%). The same can be said of agriculture in New Zealand's case (73%). Women have also achieved parity in agriculture in Australia (50%) and the United States (44%). Just one in five engineering graduates in the latter two countries is a woman, a situation that has not changed over the past decade. In New Zealand, women jumped from constituting 39% to 70% of agricultural graduates (all levels) between 2000 and 2012 but ceded ground in science (43–39%), engineering (33–27%) and health (80–78%). As for Canada, it has not reported sex-disaggregated data for women graduates in science and engineering in recent years. Moreover, none of the four countries mentioned here have reported recent data on the share of female researchers. [ 141 ] South Asia is the region where women make up the smallest proportion of researchers: 17%. This is 13 percentage points below sub-Saharan Africa. Of those countries in South Asia reporting data for 2009–2013, Nepal has the lowest representation of all (in head counts), at 8% (2010), a substantial drop from 15% in 2002. In 2013, only 14% of researchers (in full-time equivalents) were women in the region's most populous country, India, down slightly from 15% in 2009. The percentage of female researchers is highest in Sri Lanka (39%), followed by Pakistan, where the share rose from 24% in 2009 to 31% in 2013. There are no recent data available for Afghanistan or Bangladesh. [ 141 ] Women are most present in the private non-profit sector – they make up 60% of employees in Sri Lanka – followed by the academic sector, which employs 30% of Pakistani and 42% of Sri Lankan female researchers. Women tend to be less present in the government sector and least likely to be employed in the business sector, accounting for 23% of employees in Sri Lanka, 11% in India and just 5% in Nepal. Women have achieved parity in science in both Sri Lanka and Bangladesh but are less likely to undertake research in engineering. 
They represent 17% of the research pool in Bangladesh and 29% in Sri Lanka. Many Sri Lankan women have followed the global trend of opting for a career in agricultural sciences (54%) and they have also achieved parity in health and welfare. In Bangladesh, just over 30% choose agricultural sciences and health, which goes against the global trend. Although Bangladesh still has progress to make, the share of women in each scientific field has increased steadily over the past decade. [ 141 ] Southeast Asia presents a different picture entirely, with women basically on a par with men in some countries: they make up 52% of researchers in the Philippines and Thailand, for example. Other countries are close to parity, such as Malaysia and Vietnam, whereas Indonesia and Singapore are still around the 30% mark. Cambodia trails its neighbours at 20%. Female researchers in the region are spread fairly equally across the sectors of participation, with the exception of the private sector, where they make up 30% or less of researchers in most countries. The proportion of women tertiary graduates reflects these trends, with high percentages of women in science in Brunei Darussalam, Malaysia, Myanmar and the Philippines (around 60%) and a low of 10% in Cambodia. Women make up the majority of graduates in health sciences, from 60% in Laos to 81% in Myanmar – Vietnam being an exception at 42%. Women graduates are on a par with men in agriculture but less present in engineering: Vietnam (31%), the Philippines (30%) and Malaysia (39%); here, the exception is Myanmar, at 65%. In the Republic of Korea, women make up about 40% of graduates in science and agriculture and 71% of graduates in health sciences but only 18% of researchers overall. This represents a loss of the investment made in educating girls and women up through tertiary education, a result of traditional views of women's role in society and in the home. Kim and Moon (2011) remark on the tendency of Korean women to withdraw from the labour force to take care of children and assume family responsibilities, calling it a 'domestic brain drain'. [ 141 ] Women remain very much a minority in Japanese science (15% in 2013), although the situation has improved slightly (13% in 2008) since the government fixed a target in 2006 of raising the ratio of female researchers to 25%. Calculated on the basis of the current number of doctoral students, the government hopes to obtain a 20% share of women in science, 15% in engineering and 30% in agriculture and health by the end of the current Basic Plan for Science and Technology in 2016. In 2013, Japanese female researchers were most common in the public sector in health and agriculture, where they represented 29% of academics and 20% of government researchers. In the business sector, just 8% of researchers were women (in head counts), compared to 25% in the academic sector. In other public research institutions, women accounted for 16% of researchers. One of the main thrusts of Abenomics , Japan's current growth strategy, is to enhance the socio-economic role of women. Consequently, the selection criteria for most large university grants now take into account the proportion of women among teaching staff and researchers. [ 141 ] The low ratio of women researchers in Japan and the Republic of Korea, which both have some of the highest researcher densities in the world, brings the regional average for the share of women among researchers down to 22.5%. 
[ 141 ] At 37%, the share of female researchers in the Arab States compares well with other regions. The countries with the highest proportion of female researchers are Bahrain and Sudan, at around 40%. Jordan, Libya, Oman, Palestine and Qatar have percentage shares in the low twenties. The country reporting the lowest participation of female researchers is Saudi Arabia, even though women make up the majority of its tertiary graduates; the figure of 1.4%, however, covers only the King Abdulaziz City for Science and Technology. Female researchers in the region are primarily employed in government research institutes, with some countries also seeing a high participation of women in private nonprofit organizations and universities. [ 159 ] With the exception of Sudan (40%) and Palestine (35%), fewer than one in four researchers in the business enterprise sector is a woman; in half of the countries reporting data, there are barely any women at all employed in this sector. [ 141 ] Despite these variable numbers, the percentage of female tertiary-level graduates in science and engineering is very high across the region, which indicates there is a substantial drop-off between graduation and employment in research. Women make up half or more than half of science graduates in all but Sudan, and over 45% in agriculture in eight out of the 15 countries reporting data, namely Algeria, Egypt, Jordan, Lebanon, Sudan, Syria, Tunisia and the United Arab Emirates. In engineering, women make up over 70% of graduates in Oman, with rates of 25–38% in the majority of the other countries, which is high in comparison to other regions. [ 141 ] The participation of women is somewhat lower in health than in other regions, possibly on account of cultural norms restricting interactions between males and females. Iraq and Oman have the lowest percentages (mid-30s), whereas Iran, Jordan, Kuwait, Palestine and Saudi Arabia are at gender parity in this field. The United Arab Emirates and Bahrain have the highest rates of all: 83% and 84%. [ 141 ] Once Arab women scientists and engineers graduate, they may come up against barriers to finding gainful employment. These include a misalignment between university programmes and labour market demand – a phenomenon which also affects men – a lack of awareness about what a career in their chosen field entails, family bias against working in mixed-gender environments and a lack of female role models. [ 141 ] [ 160 ] One of the countries with the smallest female labour forces is developing technical and vocational education for girls as part of a wider scheme to reduce dependence on foreign labour. By 2017, the Technical and Vocational Training Corporation of Saudi Arabia is to have constructed 50 technical colleges, 50 girls' higher technical institutes and 180 industrial secondary institutes. The plan is to create training placements for about 500 000 students, half of them girls. Boys and girls will be trained in vocational professions that include information technology, medical equipment handling, plumbing, electricity and mechanics. [ 141 ] Just under one in three (30%) researchers in sub-Saharan Africa is a woman. Much of sub-Saharan Africa is seeing solid gains in the share of women among tertiary graduates in scientific fields. In two of the top four countries for women's representation in science, however, women graduates are part of very small cohorts: they make up 54% of Lesotho 's 47 tertiary graduates in science and 60% of those in Namibia 's graduating class of 149. 
South Africa and Zimbabwe , which have larger graduate populations in science, have achieved parity, with 49% and 47% respectively. The next grouping clusters seven countries poised at around 35–40% ( Angola , Burundi , Eritrea , Liberia , Madagascar , Mozambique and Rwanda ). The rest are grouped around 30% or below ( Benin , Ethiopia , Ghana , Swaziland and Uganda ). Burkina Faso ranks lowest, with women making up 18% of its science graduates. [ 141 ] Female representation in engineering is fairly high in sub-Saharan Africa in comparison with other regions. In Mozambique and South Africa, for instance, women make up more than 34% and 28% of engineering graduates, respectively. Numbers of female graduates in agricultural science have been increasing steadily across the continent, with eight countries reporting a share of women graduates of 40% or more (Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone , South Africa, Swaziland and Zimbabwe). In health, this rate ranges from 26% and 27% in Benin and Eritrea to 94% in Namibia. [ 141 ] Of note is that women account for a relatively high proportion of researchers employed in the business enterprise sector in South Africa (35%), Kenya (34%), Botswana and Namibia (33%) and Zambia (31%). Female participation in industrial research is lower in Uganda (21%), Ethiopia (15%) and Mali (12%). [ 141 ] From the twentieth century to the present day, more and more women have been acknowledged for their work in science. However, women often find themselves at odds with the expectations placed on them in relation to their scientific work. [ 161 ] [ 162 ] For example, in 1968 James Watson questioned scientist Rosalind Franklin's place in the industry, claiming that "the best place for a feminist was in another person's lab". [ 110 ] : 76–77 Women were, and still are, often critiqued on their overall presentation. [ 163 ] Franklin, for instance, was seen as lacking femininity because she did not wear lipstick or revealing clothing. [ 110 ] : 76–77 Since on average most of a woman's colleagues in science are men who do not see her as a true social peer, she will also find herself left out of informal discussions of possible research directions outside the laboratory. In her book Has Feminism Changed Science? , Londa Schiebinger notes that men would discuss their research outside the lab, but that such conversations were preceded by culturally "masculine" small-talk topics which, intentionally or not, excluded women socialized into their culture's feminine gender role. [ 110 ] : 81–91 Excluding many women from these after-hours work discussions produced a more segregated work environment, in which women would converse with other women in science about their current findings and theories. Ultimately, the women's work was devalued because no male scientist had been involved in the overall research and analysis. According to Oxford University Press, the inequality toward women is "endorsed within cultures and entrenched within institutions [that] hold power to reproduce that inequality". [ 164 ] There are various gendered barriers in social networks that prevent women from working in male-dominated fields and top management jobs. Social networks are shaped by cultural beliefs such as schemas and stereotypes. 
[ 164 ] According to social psychology studies, top management jobs are more likely to have incumbent schemas that favor "an achievement-oriented aggressiveness and emotional toughness that is distinctly male in character". [ 164 ] Gender stereotypes of the feminine style assume women to be conforming and submissive to male culture, creating a perception that women are unqualified for top management jobs. In attempting to demonstrate competence and power, women can still be seen as unlikeable and untrustworthy, even if they excel at traditionally "masculine" tasks. [ 164 ] In addition, women's achievements are likely to be dismissed or discredited. [ 164 ] Such "untrustworthy, dislikable" women may well have been denied recognition because men feared a woman overtaking their management positions. Social networks and gender stereotypes thus produce many of the injustices women experience in the workplace, as well as the various obstacles they encounter when trying to advance into male-dominated and top management jobs. Women in professions like science, technology, and other related industries are likely to encounter these gendered barriers in their careers. [ 164 ] While there has been a push to encourage more women to participate in science, there is less outreach to lesbian, bi, or gender nonconforming women, and gender nonconforming people more broadly. [ 165 ] Because of the lack of data and statistics on LGBTQ involvement in STEM fields, it is unknown to what degree lesbian and bisexual women and gender nonconforming people (transgender, nonbinary/agender, or those who eschew the gender system altogether) are even more underrepresented than their straight peers, but a general lack of out lesbian and bi women in STEM has been noted. [ 165 ] [ 166 ] Reasons for the under-representation of same-sex attracted women and gender nonconforming people in STEM fields include a lack of role models in K–12 , [ 165 ] [ 166 ] [ 167 ] the desire of some transgender girls and women to adopt traditional heteronormative gender roles (gender being a cultural performance and a socially determined, subjective internal experience), [ 168 ] [ 169 ] employment discrimination, and the possibility of sexual harassment in the workplace. Historically, women who accepted STEM research positions with the government or the military remained in the closet due to a lack of federal protections or because homosexual or gender nonconforming expression was criminalized in their country. A notable example is Sally Ride , a physicist, the first American female astronaut, and a lesbian. [ 170 ] [ 171 ] Ride chose not to reveal her sexuality publicly during her lifetime; it was purposefully disclosed in her obituary after her death in 2012. [ 171 ] She is known as the first American woman – and the youngest American – to enter space, as well as for founding her own company, Sally Ride Science, which encourages young girls to enter the STEM field. She kept her sexuality to herself because she was familiar with the male-dominated NASA's anti-homosexual policies at the time of her space travel. [ 171 ] Sally Ride's legacy continues, as her company still works to increase girls' and women's participation in the STEM fields. [ 172 ] In a nationwide study of LGBTQA employees in STEM fields in the United States, same-sex attracted and gender nonconforming women in engineering, earth sciences, and mathematics reported that they were less likely to be out in the workplace. 
[ 173 ] In general, LGBTQA people in this survey reported that the more female or feminine-identified people worked in their labs, the more accepting and safe the work environment was. [ 173 ] In another study of over 30,000 LGBT employees in STEM-related federal agencies in the United States, queer women in these agencies reported feeling isolated in the workplace and having to work harder than their gender conforming male colleagues. This isolation and overachievement remained constant as they earned supervisory positions and worked their way up the ladder. [ 174 ] Gender nonconforming people in physics, particularly those identified as trans women in physics programs and labs, felt the most isolated and perceived the most hostility. [ 175 ] Organizations such as Lesbians Who Tech , Out to Innovate , Out in Science, Technology, Engineering and Mathematics (OSTEM), Pride in STEM , and House of STEM provide networking and mentoring opportunities for lesbian girls and women and LGBT people interested in or currently working in STEM fields. These organizations also advocate for the rights of lesbian and bi women and gender nonconformists in STEM in education and the workplace. [ 176 ] Margaret Rossiter , an American historian of science, offered three concepts to explain the statistical patterns and how they have disadvantaged women in science. The first concept is hierarchical segregation: [ 177 ] the well-known phenomenon whereby the higher the level of power and prestige, the smaller the proportion of women participating, in both academia and industry. Based on data collected in 1982, women earned 54 percent of all bachelor's degrees in the United States, with 50 percent of these in science. The source also indicated that this number was increasing almost every year. [ 178 ] As of 2020, women were earning 57.3 percent of all bachelor's degrees, with 38.6 percent of these in a STEM field. [ 179 ] The second concept in Rossiter's explanation is territorial segregation . [ 110 ] : 34–35 The term refers to the way female employment clusters in specific industries or in specific job categories within industries: women stayed at home or took employment in 'feminine' fields while men left the home to work. Although nearly half of the civilian work force is female, women still hold the majority of low-paid jobs or jobs that society considers feminine. Statistics show that 60 percent of white professional women are nurses, daycare workers, or schoolteachers. [ 180 ] Researchers have collected data on many differences between women and men in science. Rossiter found that in 1966, thirty-eight percent of female scientists held master's degrees compared to twenty-six percent of male scientists, but large proportions of female scientists were in environmental and nonprofit organizations. [ 181 ] During the late 1960s and 1970s, equal-rights legislation made the number of female scientists rise dramatically. [ 182 ] The share of science degrees awarded to women rose from seven percent in 1970 to twenty-four percent in 1985. In 1975 only 385 women received bachelor's degrees in engineering, compared to 11,000 women in 1985. Elizabeth Finkel claims that even as the number of women participating in scientific fields increases, the opportunities remain limited. 
[ 183 ] Another researcher, Harriet Zuckerman , claims that when a woman and a man have similar abilities for a job, the probability of the woman getting the job is lower. [ 184 ] Finkel agrees, saying, "In general, while woman and men seem to be completing doctorate with similar credentials and experience, the opposition and rewards they find are not comparable. Women tend to be treated with less salary and status, many policy makers notice this phenomenon and try to rectify the unfair situation for women participating in scientific fields." [ 181 ] Despite women's tendency to perform better than men academically, factors such as stereotyping, lack of information, and family influence have been found to affect women's involvement in science. Stereotyping has an effect because people associate characteristics such as nurturing, kindness, and warmth, or strength and power, with a particular gender; these associations lead people to assume that certain jobs are more suitable for one gender. [ 185 ] Many institutions have worked over the years to address the lack of information through programs such as the IFAC project [ 186 ] (Information for a choice: empowering women through learning for scientific and technological career paths), which investigated women's low participation in science and technology fields from high school to university level. However, not all efforts have been as successful: the "Science: it's a girl thing" campaign, which has since been removed, received backlash for suggesting that women must partake in "girly" or "feminine" activities. [ 186 ] The underlying idea is that if women are fully informed of their career choices and employability, they will be more inclined to pursue STEM field jobs. Women also suffer from a lack of female role models in science. [ 186 ] Family influence depends on education level, economic status, and belief system. [ 187 ] The education level of a student's parents matters because people with higher education often place a different value on education than those without it. Parents can also be an influence in that they want their children to follow in their footsteps and pursue a similar occupation; for daughters in particular, the mother's line of work has been found to correlate with the daughter's. [ 188 ] Economic status can influence what kind of higher education a student gets, depending on whether they are work-bound or college-bound; a work-bound student may choose a shorter career path in order to begin earning money quickly. The belief system of a household can also have a big impact on women, depending on the family's religious or cultural viewpoints; some countries still have regulations on women's occupations, clothing, and curfews that limit women's career choices. Parental influence is also relevant because parents tend to want their children to attain what they themselves could not have as children. [ 187 ] Studies show that women are at a double disadvantage: not only must they overcome societal norms, they must also outperform men for the same recognition. [ 189 ] That sexism is alive and well in science is known. ... Even in the life sciences , where men and women start careers in fairly equal numbers, the number of women drops off rapidly at professorial level. 
On average, fewer than one in five science professors are female. Science punishes career breaks, and women who take time off to have children are immediately disadvantaged. "The flashpoint is when you're about 35 and trying to get tenure. That can be when you're trying to have kids, and it can play a major role in why you see so much attrition at that stage," said Jennifer Rohn , a cell biologist at University College London . A grant may give a woman a year's grace if she has a baby, but it takes longer than that to get back into research projects. [ 190 ] A number of organizations have been set up to combat the stereotyping that may steer girls away from careers in these areas. In the UK, the WISE Campaign (Women into Science, Engineering and Construction) and the UKRC (The UK Resource Centre for Women in SET) are collaborating to ensure that industry, academia and education are all aware of the importance of challenging the traditional approaches to careers advice and recruitment that mean some of the best brains in the country are lost to science. The UKRC and other women's networks provide female role models, resources and support for activities that promote science to girls and women. The Women's Engineering Society , a professional association in the UK, has been supporting women in engineering and science since 1919. In computing, the British Computer Society group BCSWomen is active in encouraging girls to consider computing careers and in supporting women in the computing workforce. In the United States, the Association for Women in Science is one of the most prominent organizations for professional women in science. In 2011, the Scientista Foundation was created to empower pre-professional college and graduate women in science, technology, engineering and mathematics (STEM) to stay on the career track. There are also several organizations focused on increasing mentorship from a younger age. One of the best known groups is Science Club for Girls , [ citation needed ] which pairs undergraduate mentors with high school and middle school mentees. This model of pairing undergraduate college mentors with younger students is quite popular. In addition, many young women are creating programs to boost participation in STEM at a younger level, either through conferences or competitions. In efforts to make women scientists more visible to the general public, the Grolier Club in New York hosted a "landmark exhibition" in 2003 titled "Extraordinary Women in Science & Medicine: Four Centuries of Achievement", showcasing the lives and works of 32 women scientists. [ 191 ] The National Institute for Occupational Safety and Health (NIOSH) developed a video series highlighting the stories of female researchers at NIOSH. [ 192 ] Each of the women featured in the videos shares her journey into science, technology, engineering, or math (STEM) and offers encouragement to aspiring scientists. [ 192 ] NIOSH also partners with external organizations in efforts to introduce individuals to scientific disciplines and funds several science-based training programs across the country. [ 193 ] [ 194 ] Creative Resilience: Art by Women in Science is a multimedia exhibition and accompanying publication, produced in 2021 by the Gender Section of the United Nations Educational, Scientific and Cultural Organization ( UNESCO ). The project aims to give visibility to women, both professionals and university students, working in science, technology, engineering and mathematics ( STEM ). 
Featuring short biographical information and graphic reproductions of their artworks dealing with the COVID-19 pandemic, all accessible online, the project provides a platform for women scientists to express their experiences, insights, and creative responses to the pandemic. [ 195 ] In 2013, journalist Christie Aschwanden noted that a type of media coverage of women scientists that "treats its subject's sex as her most defining detail" was still prevalent. She proposed a checklist, the " Finkbeiner test ", [ 196 ] to help avoid this approach. [ 197 ] It was cited in the coverage of a much-criticized 2013 New York Times obituary of rocket scientist Yvonne Brill that began with the words: "She made a mean beef stroganoff". [ 198 ] Women are often poorly portrayed in film . [ 199 ] The misrepresentation of women scientists in film, television and books can influence children to engage in gender stereotyping. This was seen in a 2007 meta-analysis conducted by Jocelyn Steinke and colleagues from Western Michigan University in which, after elementary school students were given a Draw-a-Scientist Test , out of 4,000 participants only 28 girls drew female scientists. [ 200 ] A study conducted at Lund University in 2010 and 2011 analysed the genders of invited contributors to News & Views in Nature and Perspectives in Science . It found that 3.8% of the Earth and environmental science contributions to News & Views were written by women, even though the field was estimated to be 16–20% female in the United States. Nature responded by suggesting that, worldwide, a significantly lower number of Earth scientists were women, but nevertheless committed to address any disparity. [ 201 ] In 2012, a journal article published in Proceedings of the National Academy of Sciences (PNAS) reported a gender bias among science faculty. [ 202 ] Faculty were asked to review a resume from a hypothetical student and report how likely they would be to hire or mentor that student, as well as what they would offer as a starting salary. Two resumes were distributed randomly to the faculty, differing only in the name at the top (John or Jennifer). The male student was rated as significantly more competent, more likely to be hired, and more likely to be mentored. The median starting salary offered to the male student was more than $3,000 higher than that offered to the female student. Both male and female faculty exhibited this gender bias. This study suggests bias may partly explain the persistent deficit in the number of women at the highest levels of scientific fields. Another study reported that men are favored in some domains, such as biology tenure rates, but that the majority of domains were gender-fair; the authors interpreted this to suggest that the under-representation of women in the professorial ranks was not solely caused by sexist hiring, promotion, and remuneration. [ 203 ] In April 2015, Williams and Ceci published a set of five national experiments showing that hypothetical female applicants were favored by faculty for assistant professorships over identically qualified men by a ratio of 2 to 1. [ 204 ] In 2014, a controversy over the depiction of pinup women on Rosetta project scientist Matt Taylor's shirt during a press conference raised questions of sexism within the European Space Agency. [ 205 ] The shirt, which featured cartoon women with firearms, led to an outpouring of criticism and an apology, during which Taylor "broke down in tears". 
[ 170 ] In 2015, stereotypes about women in science were directed at Fiona Ingleby, research fellow in evolution, behavior, and environment at the University of Sussex , and Megan Head, postdoctoral researcher at the Australian National University , when they submitted a paper analyzing the progression of PhD graduates to postdoctoral positions in the life sciences to the journal PLOS ONE . [ 206 ] The authors received an email on 27 March informing them that their paper had been rejected due to its poor quality. [ 206 ] The email contained comments from an anonymous reviewer, including the suggestion that male authors be added in order to improve the quality of the science and to guard against incorrect interpretations of the data. [ 206 ] Ingleby posted excerpts from the email on Twitter on 29 April, bringing the incident to the attention of the public and the media. [ 206 ] The editor was dismissed from the journal and the reviewer was removed from the list of potential reviewers. A spokesman for PLOS apologized to the authors and said they would be given the opportunity to have the paper reviewed again. [ 206 ] On 9 June 2015, Nobel prize winning biochemist Tim Hunt spoke at the World Conference of Science Journalists in Seoul . Prior to applauding the work of women scientists, he described emotional tension, saying "you fall in love with them, they fall in love with you, and when you criticise them they cry." [ 207 ] Initially, his remarks were widely condemned and he was forced to resign from his position at University College London . However, multiple conference attendees gave accounts, including a partial transcript and a partial recording, maintaining that his comments were understood to be satirical before being taken out of context by the media. [ 208 ] In 2016, an article published in JAMA Dermatology reported a significant and dramatic downward trend in the number of NIH-funded women investigators in the field of dermatology and found that the gender gap between male and female NIH-funded dermatology investigators was widening. The article concluded that this disparity was likely due to a lack of institutional support for women investigators. [ 209 ] In January 2005, Harvard University President Lawrence Summers sparked controversy at a National Bureau of Economic Research (NBER) Conference on Diversifying the Science & Engineering Workforce, where he offered his explanation for the shortage of women in senior posts in science and engineering. He made comments suggesting that the lower numbers of women in high-level science positions may in part be due to innate differences in abilities or preferences between men and women. Making reference to the field of behavioral genetics, he noted the generally greater variability among men (compared to women) on tests of cognitive abilities, [ 210 ] [ 211 ] [ 212 ] leading to proportionally more men than women at both the lower and upper tails of the test score distributions. In his discussion of this, Summers said that "even small differences in the standard deviation [between genders] will translate into very large differences in the available pool substantially out [from the mean]". 
[ 213 ] Summers concluded his discussion by saying: [ 213 ] So my best guess, to provoke you, of what's behind all of this is that the largest phenomenon, by far, is the general clash between people's legitimate family desires and employers' current desire for high power and high intensity, that in the special case of science and engineering, there are issues of intrinsic aptitude, and particularly of the variability of aptitude, and that those considerations are reinforced by what are in fact lesser factors involving socialization and continuing discrimination. Although his protégée Sheryl Sandberg defended his actions and Summers himself apologized repeatedly, the Harvard Graduate School of Arts and Sciences passed a motion of "lack of confidence" in Summers' leadership, under which tenure offers to women had plummeted after he took office in 2001. [ 213 ] The year before he became president, Harvard extended 13 of its 36 tenure offers to women; by 2004 those numbers had dropped to 4 of 32, with several departments lacking even a single tenured female professor. [ 214 ] This controversy is thought to have contributed significantly to Summers' resignation from his position at Harvard the following year.
https://en.wikipedia.org/wiki/Women_in_science
The Womersley number ( $\alpha$ or $\mathrm{Wo}$ ) is a dimensionless number in biofluid mechanics and biofluid dynamics . It is a dimensionless expression of the pulsatile flow frequency in relation to viscous effects . It is named after John R. Womersley (1907–1958) for his work with blood flow in arteries . [ 1 ] The Womersley number is important in keeping dynamic similarity when scaling an experiment. An example of this is scaling up the vascular system for experimental study. The Womersley number is also important in determining the thickness of the boundary layer, to see if entrance effects can be ignored. The square root of this number is also referred to as the Stokes number , $\mathrm{Stk} = \sqrt{\mathrm{Wo}}$ , due to the pioneering work done by Sir George Stokes on the Stokes second problem . The Womersley number, usually denoted $\alpha$ , is defined by the relation $\alpha^2 = \frac{\text{transient inertial force}}{\text{viscous force}} = \frac{\rho \omega U}{\mu U L^{-2}} = \frac{\omega L^2}{\mu \rho^{-1}} = \frac{\omega L^2}{\nu}$ , where $L$ is an appropriate length scale (for example the radius of a pipe), $\omega$ is the angular frequency of the oscillations, and $\nu$ , $\rho$ , $\mu$ are the kinematic viscosity, density, and dynamic viscosity of the fluid, respectively. [ 2 ] The Womersley number is normally written in the powerless form $\alpha = L \left( \frac{\omega \rho}{\mu} \right)^{1/2}$ . In the cardiovascular system, the pulsation frequency, density, and dynamic viscosity are constant; however, the characteristic length , which in the case of blood flow is the vessel diameter, changes by three orders of magnitude between the aorta and the fine capillaries. The Womersley number thus changes with vessel size across the vasculature, from large values in the aorta to very small values in the capillaries. It can also be written in terms of the dimensionless Reynolds number (Re) and Strouhal number (St): $\alpha = \left( 2\pi \, \mathrm{Re} \, \mathrm{St} \right)^{1/2}$ . The Womersley number arises in the solution of the linearized Navier–Stokes equations for oscillatory flow (presumed to be laminar and incompressible) in a tube. It expresses the ratio of the transient or oscillatory inertia force to the shear force. When $\alpha$ is small (1 or less), the frequency of pulsations is sufficiently low that a parabolic velocity profile has time to develop during each cycle; the flow will be very nearly in phase with the pressure gradient and will be given to a good approximation by Poiseuille's law , using the instantaneous pressure gradient. When $\alpha$ is large (10 or more), the frequency of pulsations is sufficiently high that the velocity profile is relatively flat or plug-like, and the mean flow lags the pressure gradient by about 90 degrees. Along with the Reynolds number, the Womersley number governs dynamic similarity. [ 3 ] 
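To make the scaling concrete, here is a minimal numerical sketch in Python of the formula $\alpha = L(\omega\rho/\mu)^{1/2}$. The blood density, viscosity, and vessel radii used below are illustrative textbook-style assumptions, not values taken from this article; because $\alpha$ scales linearly with the radius, the three-orders-of-magnitude swing in vessel size carries over directly to the Womersley number.

```python
import math

def womersley(radius_m: float, freq_hz: float,
              density: float = 1060.0, viscosity: float = 3.5e-3) -> float:
    """Womersley number alpha = L * sqrt(omega * rho / mu).

    radius_m  -- characteristic length L (vessel radius) in metres
    freq_hz   -- pulsation frequency f; the angular frequency is omega = 2*pi*f
    density   -- fluid density rho in kg/m^3 (assumed value for blood)
    viscosity -- dynamic viscosity mu in Pa*s (assumed value for blood)
    """
    omega = 2.0 * math.pi * freq_hz
    return radius_m * math.sqrt(omega * density / viscosity)

# Illustrative radii at a 1 Hz heart rate:
print(f"aorta     (r ~ 12 mm): alpha ~ {womersley(12e-3, 1.0):.1f}")   # >> 10: plug-like
print(f"arteriole (r ~ 50 um): alpha ~ {womersley(50e-6, 1.0):.3f}")  # << 1: quasi-steady
print(f"capillary (r ~ 4 um):  alpha ~ {womersley(4e-6, 1.0):.4f}")   # << 1: quasi-steady
```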
The boundary layer thickness δ that is associated with the transient acceleration is inversely related to the Womersley number; this can be seen by recognizing the Stokes number as the square root of the Womersley number. [ 4 ]

$$\delta = \frac{L}{\alpha} = \left(\frac{\nu}{\omega}\right)^{\frac{1}{2}},$$

where $L$ is a characteristic length. In a flow distribution network that progresses from a large tube to many small tubes (e.g. a blood vessel network), the frequency, density, and dynamic viscosity are (usually) the same throughout the network, but the tube radii change. Therefore, the Womersley number is large in large vessels and small in small vessels. As the vessel diameter decreases with each division, the Womersley number soon becomes quite small. The Womersley numbers tend to 1 at the level of the terminal arteries. In the arterioles, capillaries, and venules the Womersley numbers are less than one. In these regions the inertial force becomes less important and the flow is determined by the balance of viscous stresses and the pressure gradient. This is called microcirculation. [ 4 ] Typical values of the Womersley number in the cardiovascular system of a canine at a heart rate of 2 Hz have been tabulated. [ 4 ] It has been argued that universal biological scaling laws (power-law relationships that describe variation of quantities such as metabolic rate, lifespan, length, etc., with body mass) are a consequence of the need for energy minimization, the fractal nature of vascular networks, and the crossover from high to low Womersley number flow as one progresses from large to small vessels. [ 5 ]
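As a quick illustration of the definitions above, here is a minimal Python sketch evaluating α and the oscillatory boundary-layer thickness. The blood-property and vessel values are typical textbook figures assumed for the example, not taken from the article:

```python
import math

def womersley(radius_m: float, freq_hz: float, rho: float, mu: float) -> float:
    """Womersley number alpha = L * sqrt(omega * rho / mu),
    with L the vessel radius and omega = 2*pi*f the angular frequency."""
    omega = 2.0 * math.pi * freq_hz
    return radius_m * math.sqrt(omega * rho / mu)

def stokes_layer(freq_hz: float, nu: float) -> float:
    """Oscillatory boundary-layer thickness delta = sqrt(nu / omega)."""
    return math.sqrt(nu / (2.0 * math.pi * freq_hz))

# Illustrative values (assumptions): human aorta, resting heart rate
rho, mu = 1060.0, 3.5e-3            # blood density [kg/m^3], viscosity [Pa s]
alpha = womersley(0.012, 1.2, rho, mu)   # ~12 mm radius, 72 beats/min
delta = stokes_layer(1.2, mu / rho)      # kinematic viscosity nu = mu/rho
print(f"alpha ~ {alpha:.0f}, delta ~ {delta*1e3:.2f} mm")  # alpha ~ 18, delta ~ 0.66 mm
```

An α near 20 for the aorta is consistent with the plug-like velocity profile described above, while capillary-scale radii drive α well below 1, where Poiseuille-like flow dominates.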
https://en.wikipedia.org/wiki/Womersley_number
WonderFest is an American fan convention focusing on science fiction and horror, held annually since 1992 after two years as a predecessor event. "One of the biggest hobby events in the country," [ 1 ] it takes place in Louisville, Kentucky, and is the site of the annual presentation of the Rondo Hatton Classic Horror Awards. WonderFest originated in 1990 as the hobbyist club The Scale Figure Modelers Society's Louisville Plastic Kit & Toy Show, held at a Ramada Inn hotel. [ 2 ] The club had been founded the year before by Irwin Severs and Larry Johnson. [ 3 ] In 1992, the convention changed its name to WonderFest and was held at a larger venue. Four years later, it relocated to its current home, the hotel Crowne Plaza, formerly Executive West. The convention formally split off from the hobbyist club after the 1997 show. [ 2 ] [ 4 ] The edition originally scheduled for May 30–31, 2020, and then October 24–25, 2020, was canceled because of the COVID-19 pandemic. [ 5 ] Guests throughout the years have included filmmakers / TV producers Joe Dante, D. C. Fontana, Nicholas Meyer, Greg Nicotero, and George A. Romero; genre-film actors and actresses Dirk Benedict, Martine Beswick, Veronica and Angela Cartwright, Joanna Cassidy, Yvonne Craig, Claudia Christian, Denise Crosby, Sybil Danning, Keir Dullea, Anne Francis, Marta Kristen, Gary Lockwood, Kevin McCarthy, Lee Meriwether, Caroline Munro, Robert Picardo, Linnea Quigley, and Brinke Stevens; special effects artists Ray Harryhausen, Tom Savini, and Chris Walas; comics and children's-book writers / artists Frank Cho, Basil Gogos, Joe Jusko, Michael Kaluta, Mark Schultz, William Stout, and Bernie Wrightson; and horror scions Sara Karloff [ 5 ] and Vanessa Harryhausen. [ 6 ] It also features cosplayers. [ 7 ] Events there include the annual presentation of the Rondo Hatton Classic Horror Awards [ 8 ] [ 9 ] and the WonderFest Model Contest, hosted by Amazing Figure Modeler magazine. [ 10 ] Charitable outreach has included raffles to benefit the Pediatric AIDS Foundation and the WHAS Crusade for Children. [ 11 ] In 2014, three Louisville, Kentucky-based podcasters attempted to set a Guinness World Record at WonderFest for the Longest Uninterrupted Webcast (now called the Longest Audio-Only Live Stream) with their show Tower of Technobabble, to raise money for a local animal organization's spay and neuter program. [ 12 ] They set the then-record of 41 hours. [ 13 ] The CEO as of 2004 was Dave Hodge. [ 14 ] As of 2022, its CEO was Melina Angstrom. [ 15 ]
https://en.wikipedia.org/wiki/WonderFest
Wonder Loom is a toy loom designed for children, used mainly as a way for them to create colourful bracelets and charms by weaving rubber bands together into Brunnian links. [ citation needed ] It was designed in 2013 by Choon's Designs LLC of Wixom, Michigan [ 1 ] and licensed to The Beadery Craft Products in Hope Valley, Rhode Island as the exclusive manufacturer. The Wonder Loom is a plastic pegboard-style loom measuring 2.25 inches (57 mm) by 11.75 inches (298 mm), with 3 rows of 13 pegs each. Colourful elastics are placed on the pegs and then looped using a hooked picking tool. This produces strings of interconnected loops called Brunnian links, which, depending on the pattern used, form jewelry, headbands, keychains, action figures [ 1 ] or other shapes when removed. The Wonder Loom kit includes a loom, which features patented channels and flanges to make the looping easier, a picking tool, C-clip fasteners, and an assortment of 600 rubber bands. [ 2 ] The Wonder Loom was created by Cheong Choon Ng, who is also the creator of the Rainbow Loom, when Walmart and other retailers requested a made-in-the-USA version of the toy. [ 3 ] After reworking the loom to simplify the design, Ng licensed the Wonder Loom to The Beadery Craft Products as the exclusive US manufacturer. The product became available for purchase at Wal-Mart stores on 8 November 2013 and shipped 150,000 units per week through the 2013 holiday season and into 2014. [ 4 ] The Wonder Loom has also inspired the publication of several instructional books. Since its release, new products have been added to the line, including new band colours, assortments, beads, and charms to attach to the bracelets. The Beadery also announced a smaller handheld version called the HandyLoom, along with other new tools and accessories to be revealed later in 2014. [ 2 ]
https://en.wikipedia.org/wiki/Wonder_Loom
The Wong–Sandler mixing rule is a thermodynamic mixing rule used for vapor–liquid equilibrium and liquid–liquid equilibrium calculations. [ 1 ] The first boundary condition is

$$b_m - \frac{a_m}{RT} = \sum_i \sum_j x_i x_j \left( b - \frac{a}{RT} \right)_{ij},$$

which constrains the sum of a and b. The second equation is

$$\underline{A}^E_\infty(T, P{=}\infty, x_i) = \underline{A}^E_\gamma(T, \text{low } P, x_i) \approx \underline{G}^E_\gamma(T, \text{low } P, x_i),$$

with the notable limit as $P \to \infty$ (and $\underline{V}_i \to b_i$, $\underline{V}_{mix} \to b_m$) of

$$\underline{A}^E_\infty = C\left( \frac{a_m}{b_m} - \sum_i x_i \frac{a_i}{b_i} \right).$$

The mixing rules become

$$b_m = \frac{\sum_i \sum_j x_i x_j \left( b - \frac{a}{RT} \right)_{ij}}{1 - \sum_i x_i \frac{a_i}{b_i RT} - \frac{\underline{A}^E_\infty}{CRT}}, \qquad \frac{a_m}{b_m} = \sum_i x_i \frac{a_i}{b_i} + \frac{\underline{A}^E_\infty}{C}.$$

The cross term still must be specified by a combining rule, either

$$\left( b - \frac{a}{RT} \right)_{ij} = \frac{1}{2}\left[ \left( b - \frac{a}{RT} \right)_i + \left( b - \frac{a}{RT} \right)_j \right] (1 - k_{ij})$$

or

$$\left( b - \frac{a}{RT} \right)_{ij} = \frac{b_i + b_j}{2} - \frac{\sqrt{a_i a_j}}{RT}(1 - k_{ij}).$$

This thermodynamics-related article is a stub. You can help Wikipedia by expanding it.
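The two boundary conditions above combine into closed-form expressions for the mixture parameters, and a minimal Python sketch of that combination follows. The function name, the linear combining rule, and the constant C (the value shown is the one commonly used with the Peng–Robinson equation of state) are illustrative assumptions, not from the article:

```python
R = 8.314  # universal gas constant, J/(mol K)

def wong_sandler(x, a, b, A_E_inf, T, k=None, C=-0.62323):
    """Sketch of the Wong-Sandler mixing rule.

    x: mole fractions; a: EOS energy parameters [Pa m^6/mol^2];
    b: EOS co-volumes [m^3/mol]; A_E_inf: excess Helmholtz energy at
    infinite pressure [J/mol]; C: EOS-dependent constant; k: binary
    interaction parameters k[i][j] for the cross term.
    """
    n = len(x)
    k = k or [[0.0] * n for _ in range(n)]
    # Q = sum_ij x_i x_j (b - a/RT)_ij, using the linear combining rule
    Q = 0.0
    for i in range(n):
        for j in range(n):
            cross = 0.5 * ((b[i] - a[i] / (R * T)) + (b[j] - a[j] / (R * T)))
            Q += x[i] * x[j] * cross * (1.0 - k[i][j])
    # D = sum_i x_i a_i/(b_i R T) + A_E_inf/(C R T)
    D = sum(xi * ai / (bi * R * T) for xi, ai, bi in zip(x, a, b))
    D += A_E_inf / (C * R * T)
    b_m = Q / (1.0 - D)    # mixture co-volume
    a_m = b_m * R * T * D  # mixture energy parameter
    return a_m, b_m
```

In practice the pure-component a and b would come from the equation of state evaluated at the temperature of interest, and A_E_inf from an excess Gibbs energy model such as NRTL or UNIQUAC at the same composition.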
https://en.wikipedia.org/wiki/Wong–Sandler_mixing_rule
Wonky hole is a colloquial Australian term for a submarine groundwater discharge, a freshwater spring flowing from the seabed. Wonky holes are found in the Great Barrier Reef and the Gulf of Carpentaria, both in Queensland. Wonky holes can be found in the coral reef up to 60 km (37 mi) offshore. [ 1 ] [ 2 ] [ 3 ] Wonky holes are located along riverbeds which existed in the last glacial period, ending about 11,000 years ago. At the last glacial maximum, much of northern Europe and North America was covered by ice sheets up to 3 km (1.9 mi) thick; the water tied up in the glacial ice lowered the sea level by more than 120 m (390 ft). The sediment in the submerged river beds from that period has been covered with coral in many places. Since the sediment is more permeable than the surrounding materials, it channels fresh water to thin spots in the coral, creating the fresh water springs called wonky holes. The nutrients carried by the fresh water attract fish, and therefore fishermen. Coral does not grow well in fresh water, resulting in irregular growth around wonky holes, and the rough bottoms around the outlets tend to capture fishing nets. Around 200 holes are known along the coast between Townsville and Cape York. Water flowing along the submarine riverbeds and exiting at wonky holes can be charged with nutrients carried from the mainland. These can cause eutrophication in the Great Barrier Reef lagoon. [ 4 ] A wonky hole makes a brief appearance in the science fiction novel Camouflage. [ 5 ]
https://en.wikipedia.org/wiki/Wonky_hole
WooYun (Chinese: 乌云网; lit. 'dark cloud') [ 2 ] was a Mainland China-based vulnerability disclosure platform [ 3 ] founded in May 2010 [ 4 ] by Fang Xiaodun [ 5 ] and Meng De. [ 6 ] It posted an announcement on July 20, 2016 that the site was down for an upgrade and would be restored in the shortest possible time. [ 7 ] However, as of April 12, 2021, the website remained inaccessible. [ 8 ] WooYun touted itself as a "free and equal platform for reporting vulnerabilities". [ 9 ] The Wooyun.org domain name was registered on May 6, 2010. [ 10 ] A white hat by the name of Yuan Wei ("YW") submitted an SQL vulnerability to Jiayuan.com in December 2015. Jiayuan fixed the issue and publicly thanked YW, but reported him for alleged theft of more than 900 rows of personal information in January 2016. The suspect was taken into custody in April while maintaining his innocence, explaining that the access had been caused by the sqlmap program. [ 11 ] On the evening of July 19, 2016, news broke that all of WooYun's senior managers had been taken away by the police. [ 12 ] The Wall Street Journal said it was unclear whether the Chinese government shut the site down or its organizers did. [ 13 ] iThome.com.tw speculated that the most likely reason for the shutdown of WooYun was that hackers on the platform had exposed a vulnerability in the system of China's United Front Work Department, which leaked Chinese state secrets and crossed a red line for the Chinese government. [ 14 ]
https://en.wikipedia.org/wiki/WooYun
Wood's metal, also known as Lipowitz's alloy or by the commercial names Cerrobend, Bendalloy, Pewtalloy and MCP 158, is a fusible metal alloy (having a low melting point) that is useful for soldering and making custom metal parts. The alloy is named for Barnabas Wood, who invented and patented it in 1860. [ 1 ] [ 2 ] It is a eutectic alloy of 50% bismuth, 26.7% lead, 13.3% tin, and 10% cadmium by mass. It has a melting point of approximately 70 °C (158 °F). [ 3 ] [ 4 ] Its fumes are toxic, and it is also toxic on skin exposure. Uses include making custom-shaped apertures and blocks (for example, electron-beam cutouts and lung blocks) for medical radiation treatment, and making casts of keys that are otherwise hard to duplicate. [ 5 ] [ 6 ] Like other fusible alloys, e.g. Rose's metal, Wood's metal can be used as a heat-transfer medium in hot baths. Hot baths with Rose's and Wood's metals are not used routinely, but are employed at temperatures above 220 °C (428 °F). [ 7 ] At room temperature, Wood's metal has a modulus of elasticity of 12.7 GPa and a yield strength of 26.2 MPa. [ 8 ] Wood's metal is toxic because it contains lead and cadmium, and contamination of bare skin is considered harmful. Vapour from cadmium-containing alloys is also known to pose a danger to humans. [ 9 ] Cadmium poisoning carries the risk [ 10 ] of cancer, anosmia (loss of sense of smell), and damage to the liver, kidneys, nerves, bones, and respiratory system. Field's metal is a non-toxic alternative. The dust may form flammable mixtures with air. [ 9 ]
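As a small worked example of the eutectic composition quoted above, a Python sketch of the component masses for a batch of the alloy (the batch size is arbitrary):

```python
# Eutectic composition of Wood's metal by mass, as given above
COMPOSITION = {"Bi": 0.50, "Pb": 0.267, "Sn": 0.133, "Cd": 0.10}

def batch_masses(total_kg: float) -> dict[str, float]:
    """Mass of each component metal needed for a batch of the alloy."""
    return {metal: frac * total_kg for metal, frac in COMPOSITION.items()}

print(batch_masses(2.0))  # {'Bi': 1.0, 'Pb': 0.534, 'Sn': 0.266, 'Cd': 0.2}
```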
https://en.wikipedia.org/wiki/Wood's_metal
The Wood Screw Pump is a low-lift axial-flow drainage pump designed by A. Baldwin Wood in 1913 to cope with the drainage problems of New Orleans. Wood's extremely efficient pumps replaced less efficient pumps in the city's drainage system; before their installation, the city had experienced chronic flooding, bringing diseases such as malaria and yellow fever along with contamination of drinking water supplies. The pumps are driven by 2,000 horsepower (1,500 kW) synchronous Allis-Chalmers and General Electric motors, built in the early 1900s. They were designed to lift a large volume of water into outfall canals that flowed into Lake Pontchartrain. [ 1 ] After the pumps proved their operational efficiency in New Orleans, officials around the world wanted Wood to make pumps for them, especially officials in the Netherlands. Wood rejected all requests, as he refused to leave Louisiana. Until the arrival of Hurricane Katrina, the pumps had kept much of New Orleans from experiencing major inundation for nearly 100 years. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Wood_Screw_Pump
Wood ash is the powdery residue remaining after the combustion of wood, such as burning wood in a fireplace, bonfire, or an industrial power plant. It is largely composed of calcium compounds, along with other non-combustible trace elements present in the wood, and has been used for many purposes throughout history. A comprehensive set of analyses of wood ash composition from many tree species has been carried out by Emil Wolff, [ 1 ] among others. Several factors have a major impact on the composition. [ 2 ] The burning of wood results in about 6–10% ash on average. [ 2 ] Residue ash amounting to between 0.43 and 1.82 percent of the original mass of the burned wood (assuming dry basis, meaning that H2O is driven off) is produced for certain woods if the wood is pyrolyzed until all volatiles disappear and is then burned at 350 °C (662 °F) for 8 hours. [ a ] The conditions of the combustion also affect the composition and amount of the residue ash; a higher combustion temperature will reduce the ash yield. [ 4 ] Wood ash typically contains a characteristic set of major elements. [ 2 ] [ clarification needed ] [ 5 ] As the wood burns, it produces different compounds depending on the temperature used. Some studies cite calcium carbonate (CaCO3) as the major constituent; others find no carbonate at all but calcium oxide (CaO) instead. The latter is produced at higher temperatures (see calcination). [ 3 ] The equilibrium reaction CaCO3 ⇌ CaO + CO2 is shifted leftward at 750 °C (1,380 °F) and high CO2 partial pressure (such as in a wood fire), but shifted rightward at 900 °C (1,650 °F) or when the CO2 partial pressure is reduced. [ 6 ] Much wood ash contains calcium carbonate (CaCO3) as its major component, representing 25% [ 7 ] or even 45% of total ash weight. [ 8 ] At 600 °C (1,112 °F), CaCO3 and K2CO3 were identified in one case. [ b ] Less than 10% is potash, and less than 1% is phosphate. [ 7 ] There are trace elements of iron (Fe), manganese (Mn), zinc (Zn), copper (Cu) and some heavy metals. [ 7 ] Their concentrations in ash vary with combustion temperature. [ 3 ] Decomposition of carbonates and the volatilization of potassium (K), sulfur (S), and trace amounts of copper (Cu) and boron (B) may result from increased temperature. [ 3 ] One study found that at raised temperature K, S, B, sodium (Na) and copper (Cu) decreased, whereas Mg, P, Mn, Al, Fe, and Si did not change relative to calcium (Ca). All of these trace elements are, however, present in the form of oxides at higher combustion temperatures. [ 3 ] The fractions of individual elements in wood ash (given as mass of element per mass of ash) have been tabulated. [ 2 ] : 304 One study determined that the emissions of slowly burning wood (100–200 °C (212–392 °F)) typically include 16 alkenes, 5 alkadienes, 5 alkynes and several alkanes and arenes in varying proportions. [ c ] [ 9 ] Ethene, acetylene and benzene made up a major part of the emissions under efficient combustion. [ 9 ] The proportion of C3–C7 alkenes was found to be higher for smouldering. [ 9 ] Benzene and 1,3-butadiene constituted ~10–20% and ~1–2% by mass of total non-methane hydrocarbons. [ 9 ] Wood ash can be used as a fertilizer to enrich agricultural soil. In this role, wood ash serves as a source of potassium and calcium carbonate, the latter acting as a liming agent to neutralize acidic soils. [ 7 ] Wood ash can also be used as an amendment for organic hydroponic solutions, generally replacing inorganic compounds containing calcium, potassium, magnesium and phosphorus. [ 10 ]
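The calcination equilibrium above fixes the mass balance between carbonate and oxide in the ash. A minimal Python sketch of that stoichiometry, using standard molar masses (the 45% carbonate figure is the upper value cited above; everything else is standard chemistry, not article data):

```python
# Standard molar masses in g/mol (not taken from the article)
M_CACO3, M_CAO, M_CO2 = 100.09, 56.08, 44.01

def calcine(m_caco3_g: float) -> tuple[float, float]:
    """Masses of CaO and CO2 produced when m_caco3_g grams of CaCO3
    fully decompose via CaCO3 -> CaO + CO2."""
    n = m_caco3_g / M_CACO3           # moles of carbonate
    return n * M_CAO, n * M_CO2       # grams of oxide, grams of CO2

# Example: 1 kg of ash that is 45% CaCO3 by weight
m_cao, m_co2 = calcine(0.45 * 1000)
print(f"{m_cao:.0f} g CaO, {m_co2:.0f} g CO2")  # 252 g CaO, 198 g CO2
```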
Wood ash is commonly disposed of in landfills, but with rising disposal costs, ecologically friendly alternatives, such as serving as compost for agricultural and forestry applications, are becoming more popular. [ 11 ] Because wood ash has a high char content, it can be used as an odor control agent, especially in composting operations. [ 12 ] Wood ash has a very long history of use in ceramic glazes, particularly in the Chinese, Japanese and Korean traditions, and it is now used by many craft potters. It acts as a flux, reducing the melting point of the glaze. [ 13 ] For thousands of years, plant or wood ash was leached with water to yield an impure solution of potassium carbonate. This product could be mixed with oils or fats to produce a soft "soap" or soap-like product, as was done in ancient Sumeria, Europe, and Egypt. [ 14 ] However, only certain types of plants could produce a soap that actually lathered. [ 15 ] Later, medieval European soapmakers treated the wood ash solution with slaked lime, which contains calcium hydroxide, to get a hydroxide-rich solution for soapmaking. [ 16 ] However, it was not until the invention of the Leblanc process that high-quality sodium hydroxide could be mass-produced, rendering obsolete the earlier forms of soap made with crude wood or plant ash. [ 17 ] This was a revolutionary discovery that facilitated the modern soapmaking industry. [ 18 ] The ectomycorrhizal fungi Suillus granulatus and Paxillus involutus can release elements from wood ash. [ 19 ] Wood ash is sometimes used in the process of nixtamalization, where certain types of corn (typically maize or sorghum) [ 20 ] [ 21 ] are soaked and cooked in an alkali solution to improve nutritional content and decrease the risk of mycotoxins. The alkali solution has historically been made from wood ash lye. Nixtamalization was originally practiced in Mesoamerica, from which it spread northwards through various indigenous tribes of North America. In eastern North America, nixtamalized corn was traditionally eaten in porridges and stews, a dish that Europeans would call hominy. [ 22 ] Wood ash is also used as a preservative for some kinds of cheese, such as Morbier and Humboldt Fog. [ 23 ] [ 24 ] An early leavened bread was baked as early as 6000 BC by the Sumerians, by placing the bread on heated stones and covering it with hot ash. The minerals in the wood ash could have supplemented the nutritional content of the dough as it was baked. [ 25 ] Today, the ash content of bread flour, as measured with the Chopin alveograph, [ 26 ] is strictly regulated in France. [ 27 ]
https://en.wikipedia.org/wiki/Wood_ash
Wood degradation is a complex process influenced by various biological, chemical, and environmental factors. It significantly impacts the durability and longevity of wood products and structures, necessitating effective preservation and protection strategies. Biological degradation primarily involves fungi, bacteria, and insects. Fungi are the most significant agents, causing decay through the breakdown of wood's structural components, such as cellulose, hemicellulose, and lignin. [ 1 ] [ 2 ] Chemical degradation is likewise significant. Degradation of wood in a concrete matrix is mostly attributed to the effect of the alkaline environment and the hydrolysis of lignin and hemicellulose, [ 3 ] [ 4 ] and elevated temperatures may accelerate the degradation of the cell walls. [ 5 ] Applying preservatives, such as chromated copper arsenate (CCA) or borates, can protect wood from biological and chemical degradation. [ 6 ] Coatings, such as paints, varnishes, and water repellents, provide a barrier against moisture and UV radiation. Advanced coatings containing UV stabilizers and biocides offer enhanced protection. [ 7 ]
https://en.wikipedia.org/wiki/Wood_degradation
Wood gas is a fuel gas that can be used for furnaces, stoves, and vehicles. During the production process, biomass or related carbon-containing materials are gasified within the oxygen-limited environment of a wood gas generator to produce a combustible mixture. In some gasifiers this process is preceded by pyrolysis, where the biomass or coal is first converted to char, releasing methane and tar rich in polycyclic aromatic hydrocarbons. In stark contrast with synthesis gas, which is an almost pure mixture of H2 and CO, wood gas also contains a variety of organic compounds ("distillates") that require scrubbing for use in other applications. Depending on the kind of biomass, a variety of contaminants are produced that will condense out as the gas cools. When producer gas is used to power cars and boats [ 1 ] or distributed to remote locations, it is necessary to scrub the gas to remove the materials that can condense and clog carburetors and gas lines. Anthracite and coke are preferred for automotive use because they produce the smallest amount of contamination, allowing smaller, lighter scrubbers to be used. The first wood gasifier was apparently built by Gustav Bischof in 1839. The first vehicle powered by wood gas was built by T.H. Parker in 1901. [ 2 ] Around 1900, many cities delivered fuel gases (centrally produced, typically from coal) to residences. Natural gas came into use only in the 1930s. Wood gas vehicles were used during World War II as a consequence of the rationing of fossil fuels. In Germany alone, around 500,000 "producer gas" vehicles were in use at the end of the war. Trucks, buses, tractors, motorcycles, ships, and trains were equipped with wood gasification units. In 1942, when wood gas had not yet reached the height of its popularity, there were about 73,000 wood gas vehicles in Sweden, [ 3 ] 65,000 in France, 10,000 in Denmark, and almost 8,000 in Switzerland. In 1944, Finland had 43,000 "woodmobiles", of which 30,000 were buses and trucks, 7,000 private vehicles, 4,000 tractors and 600 boats. [ 4 ] Wood gasifiers are still manufactured in China and Russia for automobiles and as power generators for industrial applications. Trucks retrofitted with wood gasifiers are used in North Korea [ 5 ] in rural areas, particularly on the roads of the east coast. A wood gasifier takes wood chips, sawdust, charcoal, coal, rubber or similar materials as fuel and burns these incompletely in a fire box, producing wood gas along with solid ash and soot, the latter two of which have to be removed periodically from the gasifier. The wood gas can then be filtered for tars and soot/ash particles, cooled, and directed to an engine or fuel cell. [ 6 ] Most of these engines have strict purity requirements for the wood gas, so the gas often has to pass through extensive gas cleaning in order to remove or convert, i.e., "crack", tars and particles. The removal of tar is often accomplished by using a water scrubber. Running wood gas in an unmodified gasoline-burning internal combustion engine may lead to problematic accumulation of unburned compounds. The quality of the gas from different gasifiers varies a great deal. Staged gasifiers, where pyrolysis and gasification occur separately instead of in the same reaction zone as was the case in the World War II gasifiers, can be engineered to produce essentially tar-free gas (less than 1 mg/m³), while single-reactor fluidized bed gasifiers may exceed 50,000 mg/m³ of tar.
Fluidized bed reactors have the advantage of being much more compact, with more capacity per unit volume and price. Depending on the intended use of the gas, tar can even be beneficial, since it increases the heating value of the gas. The heat of combustion of "producer gas" – a term used in the United States, meaning wood gas produced for use in a combustion engine – is rather low compared to other fuels. Taylor (1985) [ 7 ] reports that producer gas has a lower heat of combustion of 5.7 MJ/kg, versus 55.9 MJ/kg for natural gas and 44.1 MJ/kg for gasoline. The heat of combustion of wood is typically 15–18 MJ/kg. Presumably, these values can vary somewhat from sample to sample. The same source reports a typical chemical composition by volume, which is most likely also variable. The composition of the gas is strongly dependent on the gasification process, the gasification medium (air, oxygen or steam), and the fuel moisture. Steam-gasification processes typically yield high hydrogen contents; downdraft fixed-bed gasifiers yield high nitrogen concentrations and low tar loads, while updraft fixed-bed gasifiers yield high tar loads. [ 6 ] [ 8 ] During the production of charcoal for black powder, the volatile wood gas is vented. Extremely-high-surface-area carbon results, suitable for use as a fuel in black powder.
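Taylor's heating values quoted above make the energy-density penalty of producer gas easy to quantify. A minimal Python sketch (the wood value is the midpoint of the 15–18 MJ/kg range given above; the function name is illustrative):

```python
# Lower heating values in MJ/kg, as quoted from Taylor (1985) above;
# "wood" uses the midpoint of the 15-18 MJ/kg range
LHV = {"producer gas": 5.7, "natural gas": 55.9, "gasoline": 44.1, "wood": 16.5}

def mass_for_energy(fuel: str, energy_mj: float) -> float:
    """Mass of fuel (kg) needed to supply energy_mj megajoules."""
    return energy_mj / LHV[fuel]

# Producer gas mass equivalent to the energy in 1 kg of gasoline:
print(f"{mass_for_energy('producer gas', LHV['gasoline']):.1f} kg")  # ~7.7 kg
```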
https://en.wikipedia.org/wiki/Wood_gas
A wood gas generator is a gasification unit which converts timber or charcoal into wood gas, a producer gas consisting of atmospheric nitrogen, carbon monoxide, hydrogen, traces of methane, and other gases, which – after cooling and filtering – can then be used to power an internal combustion engine or for other purposes. Historically, wood gas generators were often mounted on vehicles, but present studies and developments concentrate mostly on stationary plants. Gasification was an important and common technology during the 19th and early 20th century. Town gas produced from coal was widely used, mainly for lighting purposes. When stationary internal combustion engines based on the Otto cycle became available in the 1870s, they began displacing steam engines as prime movers in many works requiring stationary motive power. Adoption accelerated after the Otto engine's patent expired in 1886. The potential and practical applicability of gasification to internal combustion engines was well understood from the earliest days of their development. In 1873, Thaddeus S. C. Lowe developed and patented the water gas process, by which large amounts of hydrogen gas could be generated for residential and commercial use in heating and lighting. Unlike the common coal gas, or coke gas, which was used in municipal service, this gas provided a more efficient heating fuel. During the late 19th century, internal combustion engines were commonly fueled by town gas, and during the early 20th century many stationary engines switched to using producer gas created from coke, which was substantially cheaper than town gas, which was based on the distillation (pyrolysis) of more expensive coal. During World War II, gasoline was rationed and in short supply. Due to the lack of gasoline from petroleum, older people recalled how to build gasifiers for both wood and coal, and how to convert internal combustion engines to run on gaseous fuel, and wood gas generators were in active production. In Great Britain, France, the United States and Germany, large numbers of such generators were constructed or improvised to convert wood and coal into fuel for vehicles. Commercial generators were in production before and after the war for use in special circumstances or in distressed economies. Some World War II era wood gas generators were of the "Imbert" downdraft type, designed around 1920 by French inventor Georges Imbert. Germany produced Gazogene units for vehicles including cars, trucks, artillery tractors and even tanks, to preserve the limited supplies of fuel. [ 1 ] Even in non-combatant countries, such as Sweden, Brazil or Spain, gasogene was popular, as oil became hard to obtain. In Brazil, a racer named Chico Landi won at São Paulo's Interlagos circuit in 1944, driving a wood gas-powered Alfa Romeo. [ 2 ] Coal-based town gas production was generally displaced by petroleum-based gas and natural gas; however, Great Britain continued its use of coal-based town gas until the North Sea natural gas discoveries of the 1960s and 1970s. When oil prices rose, there was renewed interest in wood gas generators. The US Federal Emergency Management Agency (FEMA) published a book in March 1989 describing how to build a gas generator in an emergency when oil is not available. [ 3 ] It described a design called the "stratified downdraft gasifier", which solves several drawbacks of earlier types. The European Union sponsored a wood gas project in Güssing, Austria, starting in 2005.
This project was an electric power plant with a wood gas generator and a gas engine converting the wood gas into 2 MW of electric power and 4.5 MW of heat. There was also an experimental device using the Fischer–Tropsch process to convert wood gas into a diesel-like fuel. By October 2005, it was possible to convert 5 kg of wood into 1 litre of fuel. The Democratic People's Republic of Korea (North Korea) is also known to have trucks that run on wood gas generators. The trucks are common outside of Pyongyang and in rural villages and smaller towns. These trucks are utilized due to sanctions placed on North Korea regarding imports of oil and gas. [ 4 ] [ 5 ] [ 6 ] [ 7 ] There is a rich literature on gas-works, town-gas, gas-generation, wood-gas, and producer gas that is now in the public domain due to its age. [ 8 ] Most successful wood gas generators in use in Europe and the United States are some variation of the earlier Imbert design. Wood gas generators often use wood; however, charcoal can also be used as a fuel. It is denser and produces a cleaner gas without the tarry volatiles and excessive water content of wood. The FEMA wood gas generator is (by definition of the FEMA manual) an emergency gasifier, designed to be rapidly assembled in a true fuel crisis. This simplified design has distinct benefits over the earlier European units, such as easier refueling and construction, but is less popular than the earlier Imbert design because of significant new problems: it lacks a fixed oxidation zone, which allows the oxidation zone to creep over a larger area and causes a drop in temperature (a lower operating temperature leads to tar production), and it lacks a true reduction zone, further increasing the design's propensity to produce tar. Wood gas with tar in the stream is considered a dirty gas, and tar will gum up a motor quickly, possibly leading to stuck valves and rings. A newer design known as the Keith gasifier improves on the FEMA unit, incorporating extensive heat recovery and eliminating the tar problem. Testing at Auburn University has shown it to be 37% more efficient than running on gasoline. [ 9 ] This system set the world speed record for biomass-powered vehicles [ 10 ] and has made several cross-country tours. [ 11 ] [ 12 ] The United Nations produced the FOA 72 document [ 13 ] with details about their wood gas generator design and construction, as does World Bank technical paper 296. [ 14 ] Wood gas generators have a number of advantages over petroleum fuels, as well as a number of disadvantages. When not carefully designed and used, there exists considerable potential for injury or death, because wood gas contains a large percentage of poisonous carbon monoxide (CO). Wood gasifiers of proven design and thoroughly tested construction are considered safe to use outdoors, or in a partially enclosed space, for example under a shelter open to the air on two sides; they may also be considered relatively safe to use in an extremely well ventilated (e.g. negative pressure) indoor area not connected to any indoor area used for sleeping, equipped with redundant (more than one), completely independent, battery-powered, regularly tested carbon monoxide gas detectors.
However, prudence dictates that any sort of experimental wood gasifier design or new construction be thoroughly tested outdoors, and only outdoors, with a "buddy" at all times, and with constant vigilance for any sign of headache, drowsiness, or nausea, as these are the first symptoms of carbon monoxide poisoning. In addition, mixtures of excessive quantities of air and gas should be avoided, as this could lead to deflagration (explosion) of the gas if a combustion source is present. Long-term storage of wood gas, except through the use of a gasholder-type water-displacement apparatus, should not be attempted, due to the volatile elements present in the gas, which, if allowed to precipitate excessively, will condense in the storage vessel. Under no circumstances should wood gas ever be compressed to more than 1 bar (15 psi) above ambient, as this may induce condensation of volatiles, as well as lead to the likelihood of severe injury or death due to carbon monoxide or deflagration if the vessel leaks or fails. [ citation needed ] In 2008, an example of designing and constructing a working wood gas generator powered truck was shown on the National Geographic Channel's Planet Mechanics in the eighth episode, "Tree Powered Car". [ 15 ] [ 16 ] In 2009, another example of designing and constructing a working wood gas powered generator engine appeared in the TV series The Colony, in the second episode of the first season, "Power Struggle"; one was also used in the tenth episode, "Exodus", to power an escape vehicle. A 2010 Mother Earth News article discussed and showed pictures of a wood gas powered engine installed in a pickup truck. [ 17 ] As part of the BBC science series Bang Goes the Theory, a Volkswagen Scirocco was converted to a design by Martin Bacon to run on used coffee grounds, and after its build in 2010 it was driven successfully on coffee alone from London to Manchester. Part of the team is now working on a more advanced design aimed at top speed as opposed to range. On September 14, 2011, at the Bonneville Salt Flats, a truck modified with a wood gas powered engine set a new world speed record for vehicles powered by wood gas, with a speed of 73 mph. [ 18 ] On the popular US radio program Car Talk, a caller in episode 1201 (which aired on January 7, 2012, and was subsequently named "20 Miles Per Woodchip") described a wood gas generating vehicle he rode in as a boy during World War II in Germany. The hosts were not familiar with the technology, likely because it was never widely adopted in the US. On March 12, 2012, in a season 2 episode of Doomsday Preppers, a wood gas generator is shown running a Ford truck and a house electric generator by prepper Scott Hunt on his multi-acre woodland property in South Carolina. The 1983 Serbian TV sitcom Truckdrivers 2 (Kamiondzije II) features, as part of its plot, a gas generator affixed to the chassis of a lorry. An article appeared in Mother Earth News in April 2012 featuring the Keith gasifier, showing pictures and talking with inventor Wayne Keith. [ 19 ] In the BBC documentary Wartime Farm, episode 5 (aired October 2012), a coal gas powered ambulance was built according to the specifications of a 1943 gas powered vehicle. [ 20 ] In season 3 of Mountain Men on the History Channel, Eustace Conway is shown operating a wood-gasified Toyota pickup for hauling goods. The Finnish prime minister Juha Sipilä has also received media attention for the wood gas powered car he built as a hobby prior to his political career. [ 21 ]
[ 21 ] The "El Kamina", model year 1987 Chevrolet El Camino , is the fastest wood-gas powered car, 140 km/h, 87 mph. [ 22 ] There are only a few companies that produce wood gasifier systems commercially. A list can be found here below. Since wood gas systems have the tendency of being rather large, most focus on stationary applications (electricity production). Some may be suitable for building into vehicles though. [ 23 ] [ 24 ]
https://en.wikipedia.org/wiki/Wood_gas_generator
Wood easily degrades without sufficient preservation. Apart from structural wood preservation measures, there are a number of different chemical preservatives and processes (also known as timber treatment, lumber treatment or pressure treatment) that can extend the life of wood, timber, and their associated products, including engineered wood. These generally increase the durability and resistance of the wood to destruction by insects or fungi. As proposed by Richardson, [ 1 ] treatment of wood has been practiced for almost as long as the use of wood itself. There are records of wood preservation reaching back to ancient Greece during Alexander the Great's rule, where bridge wood was soaked in olive oil. The Romans protected their ship hulls by brushing the wood with tar. During the Industrial Revolution, wood preservation became a cornerstone of the wood processing industry. Inventors and scientists such as Bethell, Boucherie, Burnett and Kyan made historic developments in wood preservation through their preservative solutions and processes. Commercial pressure treatment began in the latter half of the 19th century with the protection of railroad cross-ties using creosote. Treated wood was used primarily for industrial, agricultural, and utility applications, where it is still used, until its use grew considerably (at least in the United States) in the 1970s, as homeowners began building decks and backyard projects. Innovation in treated timber products continues to this day, with consumers becoming more interested in less toxic materials. Wood that has been industrially pressure-treated with approved preservative products poses a limited risk to the public and should be disposed of properly. On December 31, 2003, the U.S. wood treatment industry stopped treating residential lumber with arsenic and chromium (chromated copper arsenate, or CCA). This was a voluntary agreement with the United States Environmental Protection Agency. CCA was replaced by copper-based pesticides, with exceptions for certain industrial uses. [ 2 ] CCA may still be used for outdoor products like utility trailer beds and non-residential construction like piers, docks, and agricultural buildings. Industrial wood preservation chemicals are generally not available directly to the public and may require special approval to import or purchase, depending on the product and the jurisdiction where they are used. In most countries, industrial wood preservation operations are notifiable industrial activities that require licensing from relevant regulatory authorities such as the EPA or equivalent. Reporting and licensing conditions vary widely, depending on the particular chemicals used and the country of use. Although pesticides are used to treat lumber, preserving lumber protects natural resources (in the short term) by enabling wood products to last longer. Previous poor practices in industry have in some cases left legacies of contaminated ground and water around wood treatment sites. However, under currently approved industry practices and regulatory controls, such as those implemented in Europe, North America, Australia, New Zealand, Japan and elsewhere, the environmental impact of these operations should be minimal. [ neutrality is disputed ] [ citation needed ] Wood treated with modern preservatives is generally safe to handle, given appropriate handling precautions and personal protection measures.
However, treated wood may present certain hazards in some circumstances, such as during combustion, where loose wood dust particles or other fine toxic residues are generated, or where treated wood comes into direct contact with food and agriculture. [ citation needed ] Preservatives containing copper in the form of microscopic particles have recently been introduced to the market, usually with "micronized" or "micro" trade names and designations such as MCQ or MCA. The manufacturers represent that these products are safe, and the EPA has registered them. The American Wood Protection Association (AWPA) recommends that all treated wood be accompanied by a Consumer Information Sheet (CIS) to communicate safe handling and disposal instructions, as well as potential health and environmental hazards of treated wood. Many producers have opted to provide Material Safety Data Sheets (MSDS) instead. Although the practice of distributing MSDS instead of CIS is widespread, there is an ongoing debate regarding the practice and how best to communicate potential hazards and hazard mitigation to the end user. Neither MSDS nor the newly adopted International Safety Data Sheets (SDS) are required for treated lumber under current U.S. federal law. Chemical preservatives can be classified into three broad categories. Particulate (micronised or dispersed) copper preservative technology has been introduced in the US and Europe. In these systems, copper is ground into micro-sized particles and suspended in water rather than dissolved, as is the case with other copper products such as ACQ and copper azole. There are two particulate copper systems in production. One system uses a quat biocide (known as MCQ) and is a derivative of ACQ. The other uses an azole biocide (known as MCA or μCA-C) derived from copper azole. Two particulate copper systems, one marketed as MicroPro and the other as Wolmanized using the μCA-C formulation, have achieved Environmentally Preferable Product (EPP) certification. [ 3 ] [ 4 ] The EPP certification was issued by Scientific Certification Systems (SCS) and is based on comparative life-cycle impact assessments against an industry standard. The copper particle size used in the "micronized" copper beads ranges from 1 to 700 nm, with an average under 300 nm; larger particles (such as actual micron-scale particles) of copper do not adequately penetrate the wood cell walls. These micronized preservatives use nanoparticles of copper oxide or copper carbonate, for which safety concerns have been alleged. [ 5 ] An environmental group petitioned the EPA in 2011 to revoke the registration of the micronized copper products, citing safety issues. [ 6 ] Alkaline copper quaternary (ACQ) is a preservative made of copper, a fungicide, and a quaternary ammonium compound (quat) like didecyl dimethyl ammonium chloride, an insecticide which also augments the fungicidal treatment. ACQ has come into wide use in the US, Europe, Japan and Australia following restrictions on CCA. [ 7 ] Its use is governed by national and international standards, which determine the volume of preservative uptake required for a specific timber end use. Since it contains high levels of copper, ACQ-treated timber is five times more corrosive to common steel. It is necessary to use fasteners meeting or exceeding the requirements of ASTM A 153 Class D, such as ceramic-coated fasteners, since merely galvanized steel and even common grades of stainless steel corrode. The U.S. began mandating the use of non-arsenic-containing wood preservatives for virtually all residential-use timber in 2004.
The American Wood Protection Association (AWPA) standards for ACQ require a retention of 0.15 lb/cu ft (2.4 kg/m³) for above-ground use and 0.40 lb/cu ft (6.4 kg/m³) for ground contact. Chemical Specialties, Inc. (CSI, now Viance) received the U.S. Environmental Protection Agency's Presidential Green Chemistry Challenge Award in 2002 for the commercial introduction of ACQ. Its widespread use has eliminated major quantities of arsenic and chromium previously contained in CCA. Copper azole preservative (denoted as CA-B and CA-C under American Wood Protection Association/AWPA standards) is a major copper-based wood preservative that has come into wide use in Canada, the US, Europe, Japan and Australia following restrictions on CCA. Its use is governed by national and international standards, which determine the volume of preservative uptake required for a specific timber end use. Copper azole is similar to ACQ, with the difference that the dissolved copper preservative is augmented by an azole co-biocide such as the organic triazoles tebuconazole or propiconazole, which are also used to protect food crops, instead of the quat biocide used in ACQ. [ 8 ] The azole co-biocide yields a copper azole product that is effective at lower retentions than required for equivalent ACQ performance. The general appearance of wood treated with copper azole preservative is similar to CCA, with a green colouration. Copper azole treated wood is marketed widely under the Preserve CA and Wolmanized brands in North America, and the Tanalith brand across Europe and other international markets. The AWPA standard retention for CA-B is 0.10 lb/cu ft (1.6 kg/m³) for above-ground applications and 0.21 lb/cu ft (3.4 kg/m³) for ground-contact applications. Type C copper azole, denoted as CA-C, has been introduced under the Wolmanized and Preserve brands. The AWPA standard retention for CA-C is 0.06 lb/cu ft (0.96 kg/m³) for above-ground applications and 0.15 lb/cu ft (2.4 kg/m³) for ground-contact applications. Copper naphthenate, invented in Denmark in 1911, has been used effectively for many applications including fenceposts, canvas, nets, greenhouses, utility poles, railroad ties, beehives, and wooden structures in ground contact. Copper naphthenate is registered with the EPA as a non-restricted-use pesticide, so there are no federal applicator licensing requirements for its use as a wood preservative. Copper naphthenate can be applied by brush, dip, or pressure treatment. The University of Hawaii has found that wood treated with copper naphthenate at loadings of 1.5 lb/cu ft (24 kg/m³) is resistant to Formosan termite attack. On February 19, 1981, the Federal Register outlined the EPA's position regarding the health risks associated with various wood preservatives. As a result, the National Park Service recommended the use of copper naphthenate in its facilities as an approved substitute for pentachlorophenol, creosote, and inorganic arsenicals. A 50-year study presented to the AWPA in 2005 by Mike Freeman and Douglas Crawford says, "This study reassessed the condition of the treated wood posts in southern Mississippi, and statistically calculated the new expected post life span. It was determined that commercial wood preservatives, like pentachlorophenol in oil, creosote, and copper naphthenate in oil, provided excellent protection for posts, with life spans now calculated to exceed 60 years.
Surprisingly, creosote and penta treated posts at 75% of the recommended AWPA retention, and copper naphthenate at 50% of the required AWPA retention, gave excellent performance in this AWPA Hazard Zone 5 site. Untreated southern pine posts lasted 2 years in this test site." [ 9 ] The AWPA M4 Standard for the care of preservative-treated wood products reads, "The appropriateness of the preservation system for field treatment shall be determined by the type of preservative originally used to protect the product and the availability of a field treatment preservative. Because many preservative products are not packaged and labeled for use by the general public, a system different from the original treatment may need to be utilized for field treatment. Users shall carefully read and follow the instructions and precautions listed on the product label when using these materials. Copper naphthenate preservatives containing a minimum of 2.0% copper metal are recommended for material originally treated with copper naphthenate, pentachlorophenol, creosote, creosote solution or waterborne preservatives." [ 10 ] The M4 Standard has been adopted by the International Code Council's (ICC) 2015 International Building Code (IBC), section 2303.1.9 Preservative-treated Wood, and the 2015 International Residential Code (IRC), R317.1.1 Field Treatment. [ 11 ] The American Association of State Highway and Transportation Officials (AASHTO) has also adopted the AWPA M4 Standard. A waterborne copper naphthenate is sold to consumers under the tradename QNAP 5W. Oilborne copper naphthenate solutions with 1% copper as metal are sold to consumers under the tradenames Copper Green and Wolmanized Copper Coat; a 2% copper-as-metal solution is sold under the tradename Tenino. In CCA treatment, copper is the primary fungicide, arsenic is a secondary fungicide and an insecticide, and chromium is a fixative which also provides ultraviolet (UV) light resistance. Recognized for the greenish tint it imparts to timber, CCA is a preservative that was very common for many decades. In the pressure treatment process, an aqueous solution of CCA is applied using a vacuum and pressure cycle, and the treated wood is then stacked to dry. During the process, the mixture of oxides reacts to form insoluble compounds, helping with leaching problems. The process can apply varying amounts of preservative at varying levels of pressure to protect the wood against increasing levels of attack. Increasing protection can be applied (in increasing order of attack and treatment) for exposure to the atmosphere, implantation within soil, or insertion into a marine environment. In the last decade, concerns were raised that the chemicals may leach from the wood into the surrounding soil, resulting in concentrations higher than naturally occurring background levels. A study cited in the Forest Products Journal found that 12–13% of the chromated copper arsenate leached from treated wood buried in compost during a 12-month period. Once these chemicals have leached from the wood, they are likely to bind to soil particles, especially in soils with clay or soils that are more alkaline than neutral. In the United States, the US Consumer Product Safety Commission issued a report in 2002 stating that exposure to arsenic from direct human contact with CCA-treated wood may be higher than was previously thought.
On 1 January 2004, the Environmental Protection Agency (EPA), in a voluntary agreement with industry, began restricting the use of CCA in treated timber in residential and commercial construction, with the exception of shakes and shingles, permanent wood foundations, and certain commercial applications. This was in an effort to reduce the use of arsenic and improve environmental safety, although the EPA was careful to point out that it had not concluded that CCA-treated wood structures in service posed an unacceptable risk to the community. The EPA did not call for the removal or dismantling of existing CCA-treated wood structures. In Australia, the Australian Pesticides and Veterinary Medicines Authority (APVMA [ 12 ]) restricted the use of CCA preservative for treatment of timber used in certain applications from March 2006. CCA may no longer be used to treat wood used in "intimate human contact" applications such as children's play equipment, furniture, residential decking and handrailing. Use for low-contact residential, commercial and industrial applications remains unrestricted, as does its use in all other situations. The APVMA decision to restrict the use of CCA in Australia was a precautionary measure, even though the report [ 13 ] found no evidence demonstrating that CCA-treated timber posed unreasonable risks to humans in normal use. Similarly to the US EPA, the APVMA did not recommend dismantling or removal of existing CCA-treated wood structures. In Europe, Directive 2003/2/EC restricts the marketing and use of arsenic, including CCA wood treatment. CCA-treated wood is not permitted to be used in residential or domestic constructions. It is permitted for use in various industrial and public works, such as bridges, highway safety fencing, electric power transmission and telecommunications poles. In the United Kingdom, waste timber treated with CCA was classified in July 2012 as hazardous waste by the Department for Environment, Food and Rural Affairs. [ 14 ] Other copper-based preservatives include copper HDO (bis-(N-cyclohexyldiazeniumdioxy)-copper, or CuHDO), copper chromate, copper citrate, acid copper chromate, and ammoniacal copper zinc arsenate (ACZA). The CuHDO treatment is an alternative to CCA, ACQ and CA used in Europe and in the approval stages for the United States and Canada. ACZA is generally used for marine applications. Boric acid, oxides and salts (borates) are effective wood preservatives and are supplied under numerous brand names throughout the world. One of the most common compounds used is disodium octaborate tetrahydrate, Na2B8O13·4H2O, commonly abbreviated DOT. Borate-treated wood is of low toxicity to humans and does not contain copper or other heavy metals. However, unlike most other preservatives, borate compounds do not become fixed in the wood and can be partially leached out if exposed repeatedly to water that flows away rather than evaporating (evaporation leaves the borate behind, so it is not a problem). Even though leaching will not normally reduce boron concentrations below effective levels for preventing fungal growth, borates should not be used where they will be exposed to repeated rain, water or ground contact unless the exposed surfaces are treated to repel water. [ 15 ] Zinc-borate compounds are less susceptible to leaching than sodium-borate compounds, but are still not recommended for below-ground use unless the timber is first sealed. [ 16 ]
Recent interest in low-toxicity timber for residential use, along with new regulations restricting some wood preservation agents, has resulted in a resurgence of the use of borate-treated wood for floor beams and internal structural members. Researchers at CSIRO in Australia have developed organoborates which are much more resistant to leaching, while still providing timber with good protection from termite and fungal attack. [ 17 ] [ 18 ] The cost of producing these modified borates will limit their widespread take-up, but they are likely to be suitable for certain niche applications, especially where low mammalian toxicity is of paramount importance. Recent concerns about the health and environmental effects of metallic wood preservatives have created a market interest in non-metallic wood preservatives such as propiconazole-tebuconazole-imidacloprid, better known as PTI. The American Wood Protection Association (AWPA) standards for PTI require a retention of 0.018 lb/cu ft (0.29 kg/m³) for above-ground use and 0.013 lb/cu ft (0.21 kg/m³) when applied in combination with a wax stabilizer. The AWPA has not developed a standard for a PTI ground-contact preservative, so PTI is currently limited to above-ground applications such as decks. All three of the PTI components are also used in food crop applications. The very low amounts of PTI required in pressure-treated wood further limit its effects and substantially decrease the freight costs and associated environmental impacts of shipping preservative components to the pressure-treating plants. The PTI preservative imparts very little color to the wood; producers generally add a color agent or a trace amount of copper solution so as to identify the wood as pressure-treated and to better match the color of other pressure-treated wood products. PTI wood products are very well adapted for paint and stain applications, with no bleed-through. The addition of the wax stabilizer allows a lower preservative retention and substantially reduces the tendency of wood to warp and split as it dries. In combination with normal deck maintenance and sealer applications, the stabilizer helps maintain appearance and performance over time. PTI pressure-treated wood products are no more corrosive than untreated wood and are approved for all types of metal contact, including aluminum. PTI pressure-treated wood products are relatively new to the marketplace and are not yet widely available in building supply stores; however, some suppliers sell PTI products for delivery anywhere in the US on a job-lot order basis. Sodium silicate is produced by fusing sodium carbonate with sand or heating both ingredients under pressure. It has been in use since the 19th century. It can be a deterrent against insect attack and possesses minor flame-resistant properties; however, it is easily washed out of wood by moisture, forming a flake-like layer on top of the wood. Timber Treatment Technology, LLC, markets TimberSIL, a sodium silicate wood preservative. The TimberSIL proprietary process surrounds the wood fibers with a protective, non-toxic, amorphous glass matrix. The result is a product the company calls "Glass Wood," which they claim is Class A fire-retardant, chemically inert, rot- and decay-resistant, and superior in strength to untreated wood. [ 19 ] TimberSIL is currently involved in litigation over these claims. [ 20 ] [ 21 ]
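The AWPA retentions quoted above for PTI, ACQ, and copper azole are given in both lb/cu ft and kg/m³. A minimal Python sketch of that unit conversion (the conversion factor is a standard value, not taken from the article):

```python
# 1 lb/cu ft is approximately 16.018 kg/m^3 (standard conversion factor)
LB_CUFT_TO_KG_M3 = 16.018

def retention_si(lb_per_cuft: float) -> float:
    """Convert a preservative retention from lb/cu ft to kg/m^3."""
    return lb_per_cuft * LB_CUFT_TO_KG_M3

# Values quoted above: PTI (0.018, 0.013), CA-B (0.10, 0.21), ACQ (0.15, 0.40)
for r in (0.018, 0.013, 0.10, 0.21, 0.15, 0.40):
    print(f"{r:5.3f} lb/cu ft = {retention_si(r):.2f} kg/m^3")
# Output matches the parenthetical figures in the text:
# 0.29, 0.21, 1.60, 3.36, 2.40, 6.41 kg/m^3
```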
A number of European natural paint manufacturers have developed potassium silicate (potassium waterglass) based preservatives. They frequently include boron compounds, cellulose, lignin and other plant extracts. They are a surface application with minimal impregnation, intended for internal use. In Australia, a water-based bifenthrin preservative has been developed to improve the insect resistance of timber. As this preservative is applied by spray, it only penetrates the outer 2 mm of the timber cross-section. Concerns have been raised as to whether this thin-envelope system will provide protection against insects in the longer term, particularly when exposed to sunlight for extended periods. The fireproofing of wood utilizes a fire retardant chemical that remains stable in high-temperature environments. The fire retardant is applied under pressure at a wood treating plant, like the preservatives described above, or applied as a surface coating. In both cases, treatment provides a physical barrier to flame spread. The treated wood chars but does not oxidize; effectively, this creates a convective layer that transfers flame heat to the wood in a uniform way, which significantly slows the progress of fire through the material. There are several commercially available wood-based construction materials using pressure treatment (such as those marketed in the United States and elsewhere under the trade names 'FirePro', 'Burnblock', 'Wood-safe', 'Dricon', 'D-Blaze', and 'Pyro-Guard'), as well as factory-applied coatings under the trade names 'PinkWood' and 'NexGen'. Some site-applied coatings, as well as brominated fire retardants, have lost favor due to safety concerns and concerns surrounding the consistency of application. Specialized treatments also exist for wood used in weather-exposed applications. The only impregnation-applied fire retardant commercially available in Australia is 'NexGen'. 'Guardian', which used calcium formate as a 'powerful wood modifying agent', was removed from sale in early 2010 for unspecified reasons. Oil-borne preservatives include pentachlorophenol ("penta") and creosote. They emit a strong petrochemical odor and are generally not used in consumer products. Both of these pressure treatments routinely protect wood for 40 years in most applications. Creosote was the first wood preservative to gain industrial importance, more than 150 years ago, and it is still widely used today for the protection of industrial timber components where long service life is essential. Creosote is a tar-based preservative that is commonly used for utility poles and railroad ties or sleepers. Creosote is one of the oldest wood preservatives, and was originally derived from a wood distillate; now, virtually all creosote is manufactured from the distillation of coal tar. Creosote is regulated as a pesticide and is not usually sold to the general public. In recent years in Australia and New Zealand, linseed oil has been incorporated into preservative formulations as a solvent and water repellent to "envelope treat" timber. This involves treating just the outer 5 mm of the cross-section of a timber member with preservative (e.g., permethrin 25:75), leaving the core untreated. While not as effective as CCA or LOSP methods, envelope treatments are significantly cheaper, as they use far less preservative. Major preservative manufacturers add a blue (or red) dye to envelope treatments: blue-colored timber is for use south of the Tropic of Capricorn and red for elsewhere.
The colored dye also indicates that the timber is treated for resistance to termites/white ants. There is an ongoing promotional campaign in Australia for this type of treatment. This class of timber treatments uses white spirit , or light oils such as kerosene , as the solvent carrier to deliver preservative compounds into timber. Synthetic pyrethroids , such as permethrin, bifenthrin or deltamethrin, are typically used as insecticides. In Australia and New Zealand, the most common formulations use permethrin as an insecticide, and propiconazole and tebuconazole as fungicides. While still using a chemical preservative, this formulation contains no heavy-metal compounds. With the introduction of strict volatile organic compound (VOC) laws in the European Union, LOSPs have disadvantages due to the high cost and long process times associated with vapour-recovery systems. LOSPs have also been emulsified into water-based solvents. While this does significantly reduce VOC emissions, the timber swells during treatment, removing many of the advantages of LOSP formulations. Various epoxy resins , usually thinned with a solvent such as acetone or methyl ethyl ketone (MEK), can be used to both preserve and seal wood. The wood coatings market in general is projected to exceed $12 billion by 2027. [ 22 ] Biologically modified timber is treated with biopolymers from agricultural waste. After drying and curing, the soft timber becomes durable and strong. With this process fast growing pinewood acquires properties similar to tropical hardwood. Production facilities for this process are located in The Netherlands, and the product is known under the trade name "NobelWood". Furfuryl alcohol is manufactured from agricultural waste, such as sugarcane bagasse. Theoretically this alcohol can be derived from any fermented biomass waste and therefore can be called a green chemical. After condensation reactions, pre-polymers are formed from the furfuryl alcohol. Fast growing softwood is impregnated with the water-soluble bio-polymer. After impregnation the wood is dried and heated, which initiates a polymerisation reaction between the bio-polymer and the wood cells. This process results in wood cells which are resistant to microorganisms. At the moment the only timber species being used for this process is Pinus radiata , a very fast growing tree species with a porous structure which is particularly suitable for impregnation processes. The technique is applied to timber mainly for the building industry as a cladding material. The technique is being further developed in order to achieve similar physical and biological properties in other polyfurfuryl-impregnated wood species. Besides the impregnation with the biopolymers, the timber can also be impregnated with fire retardant resins. This combination creates a timber with durability class I and a fire safety certification of Euro class B. Chemical modification of wood at the molecular level has been used to improve its performance properties. Many chemical reaction systems for the modification of wood, especially those using various types of anhydrides , have been published; however, the reaction of wood with acetic anhydride has been the most studied. [ 23 ] [ 24 ] [ 25 ] The physical properties of any material are determined by its chemical structure. Wood contains an abundance of chemical groups called free hydroxyls . Free hydroxyl groups readily absorb and release water according to changes in the climatic conditions to which they are exposed.
This is the main reason why wood's dimensional stability is affected by swelling and shrinking. It is also believed that the digestion of wood by enzymes initiates at the free hydroxyl sites, which is one of the principal reasons why wood is prone to decay. [ 26 ] Acetylation effectively changes the compounds with free hydroxyls within wood into acetate esters . [ 27 ] This is done by reacting the wood with acetic anhydride , which comes from acetic acid . When free hydroxyl groups are transformed to acetoxy groups, the ability of the wood to absorb water is greatly reduced, rendering the wood more dimensionally stable and, because it is no longer digestible, extremely durable. In general, softwoods naturally have an acetyl content from 0.5 to 1.5% and more durable hardwoods from 2 to 4.5%. Acetylation takes wood well beyond these levels with corresponding benefits. These include an extended coatings life due to acetylated wood acting as a more stable substrate for paints and translucent coatings. Acetylated wood is non-toxic and does not have the environmental issues associated with traditional preservation techniques. The acetylation of wood was first done in Germany in 1928 by Fuchs. In 1946, Tarkow, Stamm and Erickson first described the use of wood acetylation to stabilize wood from swelling in water. Since the 1940s, many laboratories around the world have looked at acetylation of many different types of woods and agricultural resources. In spite of the vast amount of research on chemical modification of wood, and, more specifically, on the acetylation of wood, commercialization did not come easily. The first patent on the acetylation of wood was filed by Suida in Austria in 1930. Later, in 1947, Stamm and Tarkow filed a patent on the acetylation of wood and boards using pyridine as a catalyst. In 1961, the Koppers Company published a technical bulletin on the acetylation of wood using no catalyst, but with an organic cosolvent. [ 28 ] In 1977, in Russia, Otlesnov and Nikitina came close to commercialization, but the process was discontinued, presumably because cost-effectiveness could not be achieved. In 2007, Titan Wood, a London-based company with production facilities in The Netherlands, achieved cost-effective commercialization and began large-scale production of acetylated wood under the trade name "Accoya". [ 29 ] Copper plating or copper sheathing is the practice of covering wood, most commonly wooden hulls of ships, with copper metal. As metallic copper is both repellent and toxic to fungus, insects such as termites, and marine bivalves , this preserves the wood and also acts as an anti-fouling measure to prevent aquatic life from attaching to the ship's hull and reducing the ship's speed and maneuverability. Modern marine bottom paints often incorporate a significant amount of copper in their formulations for the same reason, although they are not recommended for aluminum hulls because of the possibility of galvanic corrosion . [ 30 ] Some wood species are resistant to decay in their natural state, due to high levels of organic chemicals called extractives , mainly polyphenols , which provide them with antimicrobial properties. [ 31 ] Extractives are chemicals that are deposited in the heartwood of certain tree species as they convert sapwood to heartwood ; they are, however, present in both parts.
[ 32 ] Huon pine ( Lagarostrobos franklinii ), merbau ( Intsia bijuga ), ironbark ( Eucalyptus spp.), totara ( Podocarpus totara ), puriri ( Vitex lucens ), kauri ( Agathis australis ), and many cypresses , such as coast redwood ( Sequoia sempervirens ) and western red cedar ( Thuja plicata ), fall in this category. However, many of these species tend to be prohibitively expensive for general construction applications. Huon pine was used for ship hulls in the 19th century, but over-harvesting and Huon pine's extremely slow growth rate make it now a specialty timber. Huon pine is so rot resistant that fallen trees from many years ago are still commercially valuable. Merbau is still a popular decking timber and has a long life in above ground applications, but it is logged in an unsustainable manner and is too hard and brittle for general use. Ironbark is a good choice where available. It is harvested from both old-growth forests and plantations in Australia and is highly resistant to rot and termites . It is most commonly used for fence posts and house stumps. Eastern red cedar ( Juniperus virginiana ) and black locust ( Robinia pseudoacacia ) have long been used for rot-resistant fence posts and rails in the eastern United States , with the black locust also planted in modern times in Europe. Coast redwood is commonly used for similar applications in the western United States . Totara and puriri were used extensively in New Zealand during the European colonial era when native forests were "mined", even as fence posts, many of which are still in service. Totara was used by the Māori to build large waka (canoes). Today, they are specialty timbers as a result of their scarcity, although lower grade stocks are sold for landscaping use. Kauri is a superb timber for building the hulls and decks of boats. It too is now a specialty timber, and ancient logs (in excess of 3,000 years old) that have been mined from swamps are used by wood turners and furniture makers. The natural durability or rot and insect resistance of wood species is always based on the heartwood (or "truewood"). The sapwood of all timber species should be considered to be non-durable without preservative treatment. Natural substances purified from naturally rot-resistant trees and responsible for natural durability, also known as natural extractives , are another promising class of wood preservatives. Several compounds have been described as responsible for natural durability, including different polyphenols , lignins , lignans (such as gmelinol , plicatic acid ), hinokitiol , α-cadinol and other sesquiterpenoids , flavonoids (such as mesquitol ), and other substances. [ 33 ] [ 34 ] [ 35 ] These compounds are mostly identified in the heartwood , although they are also present in minimal concentrations in the sapwood . [ 36 ] Tannins , which have also been shown to act as protectants, are present in the bark of trees. [ 37 ] Treatment of timber with natural extractives, such as hinokitiol , tannins , and different tree extracts, has been studied and proposed as another environmentally-friendly wood preservation method. [ 38 ] [ 39 ] [ 40 ] [ 41 ] Tung oil has been used for hundreds of years in China , where it was used as a preservative for wooden ships. The oil penetrates the wood and then hardens to form an impermeable hydrophobic layer up to 5 mm into the wood. As a preservative it is effective for exterior work above and below ground, but the thin layer makes it less useful in practice. It is not available as a pressure treatment.
By going beyond kiln drying, heat treatment may make timber more durable. By heating timber to a certain temperature, it may be possible to make the wood fibre less appetizing to insects. Heat treatment can also improve the properties of the wood with respect to water, giving a lower equilibrium moisture content, less moisture deformation, and better weather resistance. Heat-treated timber is weather-resistant enough to be used unprotected, for example in facades or kitchen tables, where wetting is expected. However, heating can reduce the amount of volatile organic compounds, [ 32 ] which generally have antimicrobial properties. [ 42 ] There are four similar heat treatments: Westwood, developed in the United States; Retiwood, developed in France; Thermowood, developed in Finland by VTT; and Platowood, developed in The Netherlands. These processes autoclave the treated wood, subjecting it to pressure and heat, along with nitrogen or water vapour to control drying, in a staged treatment process ranging from 24 to 48 hours at temperatures of 180 °C to 230 °C, depending on timber species. These processes increase the durability, dimensional stability and hardness of the treated wood by at least one class; however, the treated wood is darkened in colour, and there are changes in certain mechanical characteristics: specifically, the modulus of elasticity is increased by up to 10%, and the modulus of rupture is diminished by 5% to 20%. [ 43 ] [ 44 ] Thus, the treated wood requires pre-drilling before nailing to avoid splitting. Certain of these processes cause less impact than others in their mechanical effects upon the treated wood. Wood treated with this process is often used for cladding or siding, flooring, furniture and windows. For the control of pests that may be harbored in wood packaging material (i.e. crates and pallets ), ISPM 15 requires heat treatment of wood to 56 °C for 30 minutes to receive the HT stamp . This is typically required to ensure the killing of the pine wilt nematode and other kinds of wood pests that could be transported internationally. Wood and bamboo can be buried in mud to help protect them from insects and decay. This practice is used widely in Vietnam to build farm houses consisting of a wooden structural frame, a bamboo roof frame, and walls of bamboo and mud mixed with rice hay. While wood in contact with soil will generally decompose more quickly than wood not in contact with it, it is possible that the predominantly clay soils prevalent in Vietnam provide a degree of mechanical protection against insect attack, which compensates for the accelerated rate of decay. Also, since wood is subject to bacterial decay only under specific temperature and moisture content ranges, submerging it in water-saturated mud can retard decay by saturating the wood's internal cells beyond their moisture decay range. Probably the first attempts made to protect wood from decay and insect attack consisted of brushing or rubbing preservatives onto the surfaces of the treated wood. Through trial and error the most effective preservatives and application processes were slowly determined. During the Industrial Revolution, demand for such things as telegraph poles and railroad ties (UK: railway sleepers) helped to fuel an explosion of new techniques that emerged in the early 19th century. The sharpest rise in inventions took place between 1830 and 1840, when Bethell, Boucherie, Burnett and Kyan were making wood-preserving history. Since then, numerous processes have been introduced or existing processes improved.
The goal of modern-day wood preservation is to ensure a deep, uniform penetration at reasonable cost, without endangering the environment. The most widespread application processes today are those using artificial pressure, through which many woods are effectively treated, but several species (such as spruce, Douglas-fir, larch, hemlock and fir) are very resistant to impregnation. With the use of incising, the treatment of these woods has been somewhat successful, but with a higher cost and not always satisfactory results. One can divide the wood-preserving methods roughly into either non-pressure processes or pressure processes. There are numerous non-pressure processes for treating wood, which vary primarily in their procedure. The most common of these treatments involve the application of the preservative by means of brushing or spraying, dipping, soaking, steeping or by means of hot and cold baths. There is also a variety of additional methods involving charring, applying preservatives in bored holes, diffusion processes and sap displacement. Brushing on preservatives is a long-practised method, often used in today's carpentry workshops. Technological developments mean it is also possible to spray preservative over the surface of the timber. Some of the liquid is drawn into the wood as the result of capillary action before the spray runs off or evaporates, but unless puddling occurs penetration is limited and may not be suitable for long-term weathering. By using the spray method, coal-tar creosote, oil-borne solutions and water-borne salts (to some extent) can also be applied. A thorough brush or spray treatment with coal-tar creosote can add 1 to 3 years to the lifespan of poles or posts. Two or more coats provide better protection than one, but the successive coats should not be applied until the prior coat has dried or soaked into the wood. The wood should be seasoned before treatment. Dipping consists of simply immersing the wood in a bath of creosote or other preservative for a few seconds or minutes. Penetrations similar to those of the brushing and spraying processes are achieved. It has the advantage of minimizing hand labor. It requires more equipment and larger quantities of preservative and is not adequate for treating small lots of timber. Usually the dipping process is useful in the treatment of window sashes and doors. Except for copper naphthenate, treatment with copper salt preservatives is no longer allowed with this method. In this process the wood is submerged in a tank of water-preservative mix, and allowed to soak for a longer period of time (several days to weeks). This process was developed in the 19th century by John Kyan . The depth and retention achieved depend on factors such as species, wood moisture, preservative and soak duration. The majority of the absorption takes place during the first two or three days, but will continue at a slower pace for an indefinite period. As a result, the longer the wood can be left in the solution, the better treatment it will receive. When treating seasoned timber, both the water and the preservative salt soak into the wood, making it necessary to season the wood a second time. Posts and poles can be treated directly on endangered areas, but should be treated at least 30 cm (0.98 ft) above the future ground level. The depth obtained during regular steeping periods varies from 5 to 10 mm (0.20 to 0.39 in), up to 30 mm (1.2 in) in pine sapwood.
Due to the low absorption, solution strength should be somewhat stronger than that in pressure processes, around 5% for seasoned timber and 10% for green timber (because the concentration slowly decreases as the chemicals diffuse into the wood). The solution strength should be controlled continually and, if necessary, be corrected with the salt additive. After the timber is removed from the treatment tank, the chemical will continue to spread within the wood if it has sufficient moisture content. The wood should be weighed down and piled so that the solution can reach all surfaces. (For sawed materials, stickers should be placed between every board layer.) This process finds minimal use despite its former popularity in continental Europe and Great Britain . Named after John Howard Kyan , who patented this process in England in 1833, Kyanizing consists of steeping wood in a 0.67% mercuric chloride preservative solution. It is no longer used. Patented by Charles A. Seely, this process achieves treatment by immersing seasoned wood in successive baths of hot and cold preservatives. During the hot baths, the air in the timbers expands. When the timbers are changed to the cold bath (the preservative can also be changed), a partial vacuum is created within the lumen of the cells, causing the preservative to be drawn into the wood. Some penetration occurs during the hot baths, but most of it takes place during the cold baths. This cycle is repeated, with a significant time reduction compared to other steeping processes. Each bath may last 4 to 8 hours or in some cases longer. The temperature of the preservative should be between 60 and 110 °C (140 to 230 °F) in the hot bath and 30 to 40 °C (86 to 104 °F) in the cold bath (depending on preservative and tree species). The average penetration depth achieved with this process ranges from 30 to 50 mm (1.2 to 2.0 in). Both preservative oils and water-soluble salts can be used with this treatment. Due to the longer treatment periods, this method finds little use in the commercial wood preservation industry today. As explained in Uhlig's Corrosion Handbook, this process involves two or more chemical baths that undergo a reaction with the cells of the wood, resulting in the precipitation of preservative into the wood cells. Two chemicals commonly employed in this process are copper ethanolamine and sodium dimethyldithiocarbamate, which react to precipitate copper dimethyldithiocarbamate. The precipitated preservative is very resistant to leaching. Although introduced in the mid-1990s, its use has been discontinued in the United States of America, and it never saw commercialization in Canada. [ 45 ] Pressure processes are the most permanent methods in use today for preserving timber life. Pressure processes are those in which the treatment is carried out in closed cylinders with applied pressure or vacuum. These processes have a number of advantages over the non-pressure methods. In most cases, a deeper and more uniform penetration and a higher absorption of preservative is achieved. Further, the treating conditions can be controlled so that retention and penetration can be varied. These pressure processes can be adapted to large-scale production. The high initial costs for equipment and the energy costs are the biggest disadvantages. These treatment methods are used to protect ties, poles and structural timbers and find use throughout the world today. The various pressure processes that are used today differ in details, but the general method is in all cases the same.
The treatment is carried out in cylinders. The timbers are loaded onto special tram cars, so-called buggies or bogies , and moved into the cylinder. These cylinders are then set under pressure, often with the addition of higher temperature. As a final treatment, a vacuum is frequently used to extract excess preservatives. These cycles can be repeated to achieve better penetration. Light organic solvent preservative (LOSP) treatments often use a vacuum impregnation process. This is possible because of the lower viscosity of the white-spirit carrier used. In the full-cell process, the intent is to keep as much of the liquid absorbed into the wood during the pressure period as possible, thus leaving the maximum concentration of preservatives in the treated area. Usually, water solutions of preservative salts are employed with this process, but it is also possible to impregnate wood with oil. The desired retention is achieved by changing the strength of the solution. In 1838, William Burnett patented this development of full-cell impregnation with water solutions. The patent covered the use of water-based zinc chloride, a process also known as Burnettizing . A full-cell process with oil was patented in 1838 by John Bethell. His patent described the injection of tar and oils into wood by applying pressure in closed cylinders. This process is still used today with some improvements. Contrary to the static full-cell and empty-cell processes, the fluctuation process is a dynamic process. In this process the pressure inside the impregnation cylinder alternates between positive pressure and vacuum within a few seconds. There have been inconsistent claims that through this process it is possible to reverse the pit closure in spruce. However, the best results that have been achieved with this process in spruce do not exceed a penetration depth of 10 mm (0.39 in). Specialized equipment is necessary and therefore higher investment costs are incurred. Developed by Dr. Boucherie of France in 1838, this approach consisted of attaching a bag or container of preservative solution to a standing or a freshly cut tree with bark, branches, and leaves still attached, thereby injecting the liquid into the sap stream. Through transpiration of moisture from the leaves the preservative is drawn upward through the sapwood of the tree trunk. The modified Boucherie process consists of placing freshly cut, unpeeled timbers onto declining skids, with the stump slightly elevated, then fastening watertight covering caps or boring a number of holes into the ends, and inserting a solution of copper sulfate or other waterborne preservative into the caps or holes from an elevated container. Preservative oils tend not to penetrate satisfactorily by this method. The hydrostatic pressure of the liquid forces the preservative lengthwise into and through the sapwood, thus pushing the sap out of the other end of the timber. After a few days, the sapwood is completely impregnated; unfortunately little or no penetration takes place in the heartwood. Only green wood can be treated in this manner. This process has found considerable usage to impregnate poles and also larger trees in Europe and North America, and has experienced a revival of usage to impregnate bamboo in countries such as Costa Rica, Bangladesh, India and the state of Hawaii.
Developed in the Philippines, this method (abbreviated HPSD) consists of a cylinder pressure cap made from a 3 mm thick mild steel plate secured with 8 sets of bolts, a 2-HP diesel engine, and a pressure regulator with 1.4–14 kg/m 2 capacity. The cap is placed over the stump of a pole, tree or bamboo and the preservative is forced into the wood with pressure from the engine. First tested and patented in 1911 and 1912, this process consists of making shallow, slit-like holes in the surfaces of material to be treated, so that deeper and more uniform penetration of preservative may be obtained. Incisions made in sawed material usually are parallel with the grain of the wood. This process is common in North America (since the 1950s), where Douglas-fir products and pole butts of various species are prepared before treatment. It is most useful for woods that are resistant to side penetration, but allow preservative transport along the grain. In the region in which it is produced, it is common practice to incise all sawed Douglas-fir 3 in (76 mm) or more in thickness before treatment. Unfortunately, experience with spruce, the most important structural timber in large areas of Europe, has shown that unsatisfactory treatment depths are achieved even with incising. The maximum penetration of 2 mm (0.079 in) is not sufficient to protect wood in weathered positions. The present-day incising machines consist essentially of four revolving drums fitted with teeth or needles, or with lasers that burn the incisions into the wood. Preservatives can be spread up to 20 mm (0.79 in) along the grain and up to 2 mm (0.079 in) in the tangential and radial directions. In North America, where smaller timber dimensions are common, incision depths of 4 to 6 mm (0.16 to 0.24 in) have become standard. In Europe, where larger dimensions are widespread, incision depths of 10 to 12 mm (0.39 to 0.47 in) are necessary. The incisions are visible and often considered a wood defect. Incisions made by laser are significantly smaller than those made by spokes or needles. The costs are approximately €0.50/m 2 for conventional all-round (spoke) incising, €3.60/m 2 for laser incising and €1.00/m 2 for needle incising. (Figures originate from the year 1998 and may vary from present day prices.) An alternative approach increases the permeability of timber using microwave technology. There is some concern that this method may adversely affect the structural performance of the material. Research in this area has been conducted by the Cooperative Research Centre at the University of Melbourne, Australia. Charring of timber results in surfaces which are fire-resistant, insect-resistant and proof against weathering. Wood surfaces are ignited using a hand-held burner or moved slowly across a fire. The charred surface is then cleaned using a steel brush to remove loose bits and to expose the grain. Oil or varnish may be applied if required. [ 46 ] Charring wood with a red-hot iron is a traditional method in Japan , where it is called yakisugi or shō sugi ban (literally 'fire cypress').
https://en.wikipedia.org/wiki/Wood_preservation
Wood science [ 1 ] is the scientific field which studies the formation, the physical and chemical composition, and the macro- and microstructure of wood as a bio-based and lignocellulosic material. Wood science additionally covers the biological, chemical, physical, and mechanical properties and characteristics of wood as a natural material . [ 2 ] [ 3 ] A deep understanding of wood plays a pivotal role in several endeavors, such as the processing of wood ; the production of wood-based materials like particleboard , fiberboard , OSB and plywood ; and the utilization of wood and wood-based materials in construction and a wide array of products, including pulpwood , furniture , engineered wood products such as glued laminated timber , CLT , LVL and PSL , as well as pellets , briquettes, and numerous other wood-derived products. Initial comprehensive investigations in the field of wood science emerged at the start of the 20th century. In 1902, the Wood Processing Laboratory was founded in the Department of Forestry at Tokyo University and academic studies on wood processing were first initiated. The Forest and Forest Products Research Institute in Tokyo was also established in 1905. [ 4 ] In 1906 the Forest Products Research Institute was created in Dehradun , India . The advent of contemporary wood research commenced in 1910, when the Forest Products Laboratory (FPL) was established in Madison, Wisconsin , USA . [ 5 ] The Forest Products Laboratory played a fundamental role in wood science, providing scientific research on wood and wood products in partnership with academia, industry, local and other institutions in North and South America and worldwide. [ 6 ] [ 7 ] [ 8 ] In the following years, many wood research institutes came into existence across almost all industrialized nations. [ 9 ] From the 1960s, the founding of research institutes in the field of wood science continued at many universities, including universities of applied sciences and technological universities. Today, the International Academy of Wood Science (IAWS), a recognised non-profit assembly of wood scientists, represents the scientific area of wood science and all of its associated technological domains worldwide. [ 10 ] [ 11 ] The field of wood science can be categorized into three distinct sub-areas. [ 12 ] Several significant scientific journals publish within the areas of wood science. [ 15 ]
https://en.wikipedia.org/wiki/Wood_science
Wood stabilization refers to a number of processes which use pressure and/or vacuum to impregnate the cellular structure of wood with certain monomers , acrylics , phenolics or other resins [ 1 ] to improve dimensional stability , biological durability, hardness, and other material properties. [ 2 ] When exposed to moisture through humidity absorption or direct immersion, most wood species will swell and change shape. [ 3 ] When moisture comes into contact with wood, the water molecules penetrate the cell wall and become bound to cell wall components through hydrogen bonding. With the addition of water to the cell wall, wood volume increases nearly proportionally to the volume of water added. Swelling increases until the fiber saturation point has been reached. [ 4 ] Wood stabilization limits water absorption into the wood structure, thereby limiting the dimensional changes which arise from moisture exposure. Wood stabilization is a subset of wood preservation processes specifically used by woodworking enthusiasts to alter the material properties of specific wood species for applications within their craft or trade. Examples of wood items which are commonly stabilized include knife handles, pistol grips, straight razors, game calls and jewelry. One of the most commonly used stabilizing methods utilizes a heat-cured polymer known as methyl methacrylate (MMA). [ 2 ] The material properties of stabilized wood vary by species and by the type of stabilization process used; however, in softwoods and soft hardwoods , the improvement in strength, hardness and durability can be dramatic. For example, poplar treated with MMA increased in specimen density by 2.2 to 2.6 times, [ 2 ] with gains in hardness of approximately twofold (using the Janka Hardness Test ).
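As a rough illustration of the density figures quoted above, the following sketch multiplies an assumed untreated poplar density by the reported 2.2–2.6 factors; the 0.45 g/cm³ input is a typical handbook-style value chosen for illustration and is not taken from the article:

```python
# Illustrative arithmetic for the MMA density figures quoted above.
# The untreated density of 0.45 g/cm^3 is an assumed, typical value for
# poplar; only the 2.2x-2.6x multipliers come from the cited study.
untreated_density = 0.45  # g/cm^3 (assumption)

for factor in (2.2, 2.6):
    print(f"x{factor}: {untreated_density * factor:.2f} g/cm^3")
# -> x2.2: 0.99 g/cm^3, x2.6: 1.17 g/cm^3
```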
https://en.wikipedia.org/wiki/Wood_stabilization
Wood warping is a deviation from flatness in timber as a result of internal residual stress caused by uneven shrinkage. Warping primarily occurs due to uneven expansion or contraction caused by changes in moisture content. Warping can occur in wood considered "dry" (wood can take up and release moisture indefinitely [ 1 ] ) when it takes up moisture unevenly, or when it is allowed to return to its "dry" equilibrium state unevenly, too slowly, or too quickly. Many factors can contribute to wood warp susceptibility: wood species, grain orientation, air flow, sunlight, uneven finishing , temperature, and cutting season. Wood slabs can also become warped as a result of insufficient support from underlying shelf hardware (commonly referred to as sagging or bowing). [ 2 ] The types of wood warping include bow, crook, cup, kink, and twist. Wood warping costs the wood industry in the U.S. millions of dollars per year. [ citation needed ] Straight wood boards that leave a cutting facility sometimes arrive at the store yard warped. Although wood warping has been studied for years, the warping control model for manufacturing composite wood hasn't been updated for about 40 years. [ citation needed ] Zhiyong Cai , a researcher at Texas A&M University , has researched wood warping and in 2003 was working on a computer software program to help manufacturers make changes in the manufacturing process so that wood doesn't arrive at its destination warped after it leaves the mill or factory. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Wood_warping
Wooded meadows (also named wood-meadows , park meadows , etc.) are ecosystems in temperate forest regions. They are sparse natural stands with a regularly mowed herbaceous layer . While frequent throughout Europe during the Medieval period and before, wooded meadows have largely disappeared. Wooded meadows originated with the practices of hunter-gatherer communities. They were important in terms of social organization around a natural resource and determined much of the community's interactions with the natural world. [ 1 ] In the early 20th century, wooded meadows were used for fruit cultivation in Sweden ; however, their prevalence has decreased substantially due to changes in land management and a movement toward more intensive types of agroecosystems . [ 2 ] The more typical, calcicolous [ further explanation needed ] wooded meadows are common around the Baltic Sea . [ 3 ] Wooded meadows have high species richness . In some of the current Estonian wooded meadows, world-record species densities have been recorded (up to 76 species of vascular plants per square meter). [ 4 ] This article about environmental habitats is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Wooded_meadow
A woodland edge or forest edge is the transition zone ( ecotone ) from an area of woodland or forest to fields or other open spaces. Certain species of plants and animals are adapted to the forest edge, and these species are often more familiar to humans than species only found deeper within forests. A classic example of a forest edge species is the white-tailed deer in North America. [ citation needed ] On topographic maps woods and forests are generally depicted in a soft green colour. Their edges are – like other features – usually determined from aerial photographs , but sometimes also by terrestrial survey . However, they only represent a snapshot in time, because almost all woods have a tendency to spread or to gradually fill clearings . In addition, working out the exact edge of the wood or forest may be difficult where it transitions into scrub or bushes or the trees thin out slowly. Differences of opinion here can amount to several tens of metres. In addition, many cartographers prefer to show even small islands of trees, while others – depending on the scale of the map – prefer more general, continuous lines to demarcate the forest or woodland edges. For specialised work, aerial photographs or satellite imagery are frequently utilised without having to revise the maps. Cadastral maps cannot show the current situation because, for reasons of cost, they can only be updated at fairly long intervals, and cultural boundaries are not legally binding. On the woodland edge – however it is defined – not only does the flora change, but also the fauna and the soil type . These edge effects mean that many species of animal prefer woodland edges to the heart of the forest, because they have both protection and light – for example, tree pipits and dunnocks . At the woodland edge the trees are often different from those inside the wood, and there is often hedge vegetation, as well as brambles and low-growing plants. The more gradual the transition from open country to woodland (for example, through intermediate young trees or bushes), the less risk there is that, in stormy weather, wind will blow under the canopy and uproot the outer rows of trees. The structure of the woodland edge and its maintenance are viewed as important in forest management , especially during reforestation . Hunters also use the forest edge for the observation and hunting of wildlife , for example by using tree stands or hides .
https://en.wikipedia.org/wiki/Woodland_edge
The Woodstock of physics was the popular name given by physicists to the marathon session of the American Physical Society ’s meeting on March 18, 1987, which featured 51 presentations of recent discoveries in the science of high-temperature superconductors . Various presenters anticipated that these new materials would soon result in revolutionary technological applications, but in the three subsequent decades this proved to be overly optimistic. [ 1 ] The name is a reference to the 1969 Woodstock Music and Art Festival . Before a series of breakthroughs in the mid-1980s, most scientists believed that the extremely low temperature requirements of superconductors rendered them impractical for everyday use. However, in June 1986, K. Alex Müller and Georg Bednorz , working at IBM Zurich , raised the record critical temperature for superconductivity to 35 K above absolute zero in lanthanum barium copper oxide (LBCO); the previous record of 23 K had stood for 17 years. Their discovery stimulated a great deal of additional research in high-temperature superconductivity. By March 1987, a flurry of recent research on ceramic superconductors had succeeded in creating ever-higher superconducting temperatures, including the discovery by Maw-Kuen Wu and Jim Ashburn at the University of Alabama of a critical temperature above 77 K in yttrium barium copper oxide (YBCO). This result was followed by Paul C. W. Chu 's announcement at the University of Houston of a superconductor that operated at 93 K (−180 °C; −292 °F) – a temperature that could be achieved by cooling with liquid nitrogen . The scientific community was abuzz with excitement. The discoveries were so recent that no papers on them had been submitted by the conference deadline. However, the Society added a last-minute session to their annual meeting to discuss the new research. [ 2 ] The session was chaired by physicist M. Brian Maple , a superconductor researcher himself, who was one of the meeting's organizers. [ 3 ] It was scheduled to start at 7:30 pm in the Sutton ballroom of the New York Hilton , but excited scientists started lining up at 5:30. [ 4 ] Key researchers such as Chu and Müller were given 10 minutes to describe their research; other physicists were given five minutes. Nearly 2,000 scientists tried to squeeze into the ballroom. Those who could not find a seat filled the aisles or watched outside the room on television monitors. The session ended at 3:15 am, but many lingered until dawn to discuss the presentations. The meeting caused a surge in mainstream media interest in superconductors, and laboratories around the world raced to pursue breakthroughs in the field. In October of the same year, Bednorz and Müller were awarded the Nobel Prize in Physics "for their important break-through in the discovery of superconductivity in ceramic materials", [ 5 ] setting a record for the shortest time between discovery and award for any scientific Nobel Prize category. [ 6 ] By the following year (1988) two new families of copper-oxide superconductors – the bismuth-based or so-called BSCCO and the thallium-based or TBCCO materials – had been discovered. Both of these have superconducting transitions above 110 K (−163 °C; −262 °F). So in the follow-up March APS meeting at New Orleans a special evening session called Woodstock of Physics-II was hastily organized to highlight the synthesis and properties of these new, first-ever 'triple digit superconductors'. [ 7 ] The format of the session was the same as in New York.
Some of the panelists were repeats from the original "Woodstock" session. Additional researchers including Allen M. Hermann (at that time at the University of Arkansas ), the co-discoverer of the thallium system, and Laura H. Greene (then with AT&T Labs ) were panelists. The 1988 session was chaired by Timir Datta from the University of South Carolina . On March 5, 2007, many of the original participants reconvened in Denver to recognize and review the session on its 20-year anniversary; [ 2 ] the "reunion" was again chaired by Maple. [ 8 ]
https://en.wikipedia.org/wiki/Woodstock_of_physics
The Woods–Saxon potential is a mean field potential for the nucleons ( protons and neutrons ) inside the atomic nucleus , which is used to describe approximately the forces applied on each nucleon in the nuclear shell model for the structure of the nucleus. The potential is named after Roger D. Woods and David S. Saxon . The form of the potential, in terms of the distance r from the center of the nucleus, is: $$V(r) = -\frac{V_0}{1+\exp\left(\frac{r-R}{a}\right)}$$ where $V_0$ (having dimension of energy) represents the potential well depth, $a$ is a length representing the "surface thickness" of the nucleus, and $R = r_0 A^{1/3}$ is the nuclear radius, where $r_0 = 1.25$ fm and $A$ is the mass number . Typical values for the parameters are $V_0 \approx 50$ MeV and $a \approx 0.5$ fm. There are numerous optimized parameter sets available for different atomic nuclei. [ 1 ] [ 2 ] [ 3 ] For large mass number A this potential is similar to a potential well . It has the following desired properties: it increases monotonically with distance, i.e. it is attractive; for large A , it is approximately flat in the center; nucleons near the surface of the nucleus experience a large force toward the center; and it rapidly approaches zero as r goes to infinity, reflecting the short range of the strong nuclear force. The Schrödinger equation of this potential can be solved analytically, by transforming it into a hypergeometric differential equation. The radial part of the wavefunction solution is given by $$u(r) = \frac{1}{r}\, y^{\nu} (1-y)^{\mu}\, {}_2F_1(\mu+\nu,\, \mu+\nu+1;\, 2\nu+1;\, y)$$ where $y = \frac{1}{1+\exp\left(\frac{r-R}{a}\right)}$, $\mu = i\sqrt{\gamma^2-\nu^2}$, $\frac{2mE}{\hbar^2} = -\nu^2$ with $\nu < 0$, and $\frac{2mV_0}{\hbar^2}\,a^2 = \gamma^2$. [ 4 ] Here $${}_2F_1(a,b;c;z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}$$ is the hypergeometric function . It is also possible to analytically solve the eigenvalue problem of the Schrödinger equation with the WS potential plus a finite number of Dirac delta functions. [ 5 ] It is also possible to give analytic formulas for the Fourier transform [ 6 ] of the Woods–Saxon potential, which makes it possible to work in momentum space as well. This nuclear physics or atomic physics –related article is a stub . You can help Wikipedia by expanding it .
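A short numerical sketch of the potential defined above, using the typical parameter values quoted in the text (V_0 ≈ 50 MeV, a ≈ 0.5 fm, r_0 = 1.25 fm); the choice of A = 208 (lead-208) is an arbitrary illustration, not part of the article:

```python
import math

def woods_saxon(r, A, V0=50.0, a=0.5, r0=1.25):
    """Woods-Saxon potential V(r) in MeV; r in fm, A = mass number."""
    R = r0 * A ** (1 / 3)  # nuclear radius R = r0 * A^(1/3)
    return -V0 / (1.0 + math.exp((r - R) / a))

A = 208                      # lead-208, chosen only as an example
R = 1.25 * A ** (1 / 3)
for r in (0.0, R, R + 5.0):
    print(f"V({r:6.2f} fm) = {woods_saxon(r, A):8.3f} MeV")
# Near the center V is approximately -V0; at r = R it is exactly -V0/2;
# outside the surface it decays rapidly to zero.
```

The surface-thickness parameter a controls how quickly the potential falls off near r = R, which is the feature distinguishing it from a sharp-walled potential well.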
https://en.wikipedia.org/wiki/Woods–Saxon_potential
Woodward's rules , named after Robert Burns Woodward and also known as Woodward–Fieser rules (for Louis Fieser ), are several sets of empirically derived rules which attempt to predict the wavelength of the absorption maximum (λ max ) in an ultraviolet–visible spectrum of a given compound . Inputs used in the calculation are the type of chromophores present, the auxochromes (substituents on the chromophores), and the solvent . [ 1 ] [ 2 ] Examples are conjugated carbonyl compounds, [ 3 ] [ 4 ] [ 5 ] conjugated dienes , [ 3 ] [ 6 ] and polyenes . [ 3 ] [ 5 ] One set of Woodward–Fieser rules for dienes assigns a base value to the chromophore (214 nm for a heteroannular diene) and adds fixed increments for each structural feature. A diene is either homoannular with both double bonds contained in one ring or heteroannular with two double bonds distributed between two rings. With the aid of these rules the UV absorption maximum can be predicted, for example in these two compounds: [ 8 ] In the compound on the left, the base value is 214 nm (a heteroannular diene). This diene group has 4 alkyl substituents (labeled 1,2,3,4) and the double bond in one ring is exocyclic to the other (adding 5 nm for an exocyclic double bond). In the compound on the right, the diene is homoannular with 4 alkyl substituents. Both double bonds in the central B ring are exocyclic with respect to rings A and C. For polyenes having more than 4 conjugated double bonds one must use the Fieser–Kuhn rules . [ 3 ]
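The additive bookkeeping described above is easy to express in code. The following sketch uses the 214 nm heteroannular base value and the 5 nm increments given in the text; the 253 nm homoannular base value and the 30 nm increment for a conjugation-extending double bond are standard textbook values not stated in this excerpt and should be treated as assumptions here:

```python
def diene_lambda_max(homoannular, alkyl_groups, exocyclic_bonds, extending_bonds=0):
    """Estimate lambda_max (nm) for a conjugated diene via Woodward-Fieser increments."""
    base = 253 if homoannular else 214      # 253 nm homoannular base: textbook value
    return (base
            + 5 * alkyl_groups              # +5 nm per alkyl substituent / ring residue
            + 5 * exocyclic_bonds           # +5 nm per exocyclic double bond
            + 30 * extending_bonds)         # +30 nm per extra conjugated double bond

# Left-hand compound in the text: heteroannular, 4 alkyl substituents,
# one exocyclic double bond: 214 + 4*5 + 5 = 239 nm
print(diene_lambda_max(False, 4, 1))  # 239
# Right-hand compound: homoannular, 4 substituents, 2 exocyclic double bonds,
# evaluated under the textbook base-value assumption: 253 + 20 + 10 = 283 nm
print(diene_lambda_max(True, 4, 2))   # 283
```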
https://en.wikipedia.org/wiki/Woodward's_rules
The Woodward cis-hydroxylation (also known as the Woodward reaction ) is the chemical reaction of alkenes with iodine and silver acetate in wet acetic acid to form cis-diols. [ 1 ] [ 2 ] (conversion of an olefin into a cis-diol) The reaction is named after its discoverer, Robert Burns Woodward . This reaction has found application in steroid synthesis . [ 3 ] The reaction of the iodine with the alkene is promoted by the silver acetate, forming an iodonium ion ( 3 ). The iodonium ion is opened via an S N 2 reaction by acetic acid (or silver acetate) to give the first intermediate, the iodo-acetate ( 4 ). Through anchimeric assistance , the iodine is displaced via another S N 2 reaction to give an oxonium ion ( 5 ), which is subsequently hydrolyzed to give the mono-ester ( 6 ).
https://en.wikipedia.org/wiki/Woodward_cis-hydroxylation
The Woodward–Hoffmann rules (or the pericyclic selection rules ) [ 1 ] are a set of rules devised by Robert Burns Woodward and Roald Hoffmann to rationalize or predict certain aspects of the stereochemistry and activation energy of pericyclic reactions , an important class of reactions in organic chemistry . The rules originate in certain symmetries of the molecule's orbital structure that any molecular Hamiltonian conserves . Consequently, any symmetry-violating reaction must couple extensively to the environment ; this imposes an energy barrier on its occurrence, and such reactions are called symmetry-forbidden . Their opposites are symmetry-allowed . Although the symmetry-imposed barrier is often formidable (up to ca. 5 eV or 480 kJ/mol in the case of a forbidden [2+2] cycloaddition), the prohibition is not absolute, and symmetry-forbidden reactions can still take place if other factors (e.g. strain release) favor the reaction. Likewise, a symmetry-allowed reaction may be preempted by an insurmountable energetic barrier resulting from factors unrelated to orbital symmetry. All known cases violate the rules only superficially; instead, different parts of the mechanism become asynchronous , and each step conforms to the rules. A pericyclic reaction is an organic reaction that proceeds via a single concerted and cyclic transition state , the geometry of which allows for the continuous overlap of a cycle of (π and/or σ) orbitals . The terms conrotatory and disrotatory describe the relative sense of bond rotation involved in electrocyclic ring-opening and -closing reactions. In a disrotatory process, the two ends of the breaking or forming bond rotate in opposing directions (one clockwise, one counterclockwise); in a conrotatory process, they rotate in the same direction (both clockwise or both counterclockwise). Eventually, it was recognized that thermally-promoted pericyclic reactions in general obey a single set of generalized selection rules, depending on the electron count and topology of the orbital interactions. The key concept of orbital topology or faciality was introduced to unify several classes of pericyclic reactions under a single conceptual framework. In short, a set of contiguous atoms and their associated orbitals that react as one unit in a pericyclic reaction is known as a component , and each component is said to be antarafacial or suprafacial depending on whether the orbital lobes that interact during the reaction are on the opposite or same side of the nodal plane, respectively. (The older terms conrotatory and disrotatory, which are applicable to electrocyclic ring opening and closing only, are subsumed by the terms antarafacial and suprafacial, respectively, under this more general classification system.) Woodward and Hoffmann developed the pericyclic selection rules after performing extensive orbital-overlap calculations. At the time, Woodward wanted to know whether certain electrocyclic reactions might help synthesize vitamin B 12 . Chemists knew that such reactions exhibited striking stereospecificity , but could not predict which stereoisomer a reaction might select. In 1965, Woodward and Hoffmann realized that a simple set of rules explained the observed stereospecificity at the ends of open-chain conjugated polyenes when heated or irradiated.
In their original publication, [ 2 ] they summarized the experimental evidence and molecular orbital analysis as follows: In 1969, they would use correlation diagrams to state a generalized pericyclic selection rule equivalent to that now attached to their name: a pericyclic reaction is allowed if the sum of the number of suprafacial 4 q + 2 components and the number of antarafacial 4 r components is odd. In the intervening four years, Howard Zimmerman [ 3 ] [ 4 ] and Michael J. S. Dewar [ 5 ] [ 6 ] proposed an equally general conceptual framework: the Möbius-Hückel concept , or aromatic transition state theory . In the Dewar-Zimmerman approach, the orbital overlap topology (Hückel or Möbius) and electron count (4 n + 2 or 4 n ) result in either an aromatic or antiaromatic transition state. Meanwhile, Kenichi Fukui [ 7 ] [ 8 ] analyzed the frontier orbitals of such systems. A process in which the HOMO-LUMO interaction is constructive (results in a net bonding interaction) is favorable and considered symmetry-allowed, while a process in which the HOMO-LUMO interaction is non-constructive (results in bonding and antibonding interactions that cancel) is disfavorable and considered symmetry-forbidden. Though conceptually distinct, aromatic transition state theory (Zimmerman and Dewar), frontier molecular orbital theory (Fukui), and orbital symmetry conservation (Woodward and Hoffmann) all make identical predictions. The Woodward–Hoffmann rules exemplify the power of molecular orbital theory , [ 9 ] and indeed helped demonstrate that useful chemical results could arise from orbital analysis. The discovery would earn Hoffmann and Fukui the 1981 Nobel Prize in Chemistry . [ 10 ] By that time, Woodward had died, and so was ineligible for the prize. The interconversion of model cyclobutene and butadiene derivatives under thermal (heating) and photochemical ( ultraviolet irradiation) conditions is illustrative. The Woodward–Hoffmann rules apply to either direction of a pericyclic process. Due to the inherent ring strain of cyclobutene derivatives, the equilibrium between the cyclobutene and the 1,3-butadiene lies far to the right. Hence, under thermal conditions, the ring opening of the cyclobutene to the 1,3-butadiene is strongly favored by thermodynamics. On the other hand, under irradiation by ultraviolet light, a photostationary state is reached, a composition which depends on both the absorbance and the quantum yield of the forward and reverse reactions at a particular wavelength. Due to the different degrees of conjugation of 1,3-butadienes and cyclobutenes, only the 1,3-butadiene will have a significant absorbance at higher wavelengths, assuming the absence of other chromophores. Hence, irradiation of the 1,3-butadiene at such a wavelength can result in high conversion to the cyclobutene. Thermolysis of trans -1,2,3,4-tetramethyl-1-cyclobutene ( 1 ) afforded only one geometric isomer, ( E , E )-3,4-dimethyl-2,4-hexadiene ( 2 ); the ( Z , Z ) and the ( E , Z ) geometric isomers were not detected in the product mixture. Similarly, thermolysis of cis -1,2,3,4-tetramethyl-1-cyclobutene ( 3 ) afforded only the ( E , Z ) isomer 4 . [ 11 ] In both ring opening reactions, the carbons on the ends of the breaking σ-bond rotate in the same direction.
[ 12 ] On the other hand, the opposite stereochemical course was followed under photochemical activation: When the related compound ( E , E )-2,4-hexadiene ( 5 ) was exposed to light, cis -3,4-dimethyl-1-cyclobutene ( 6 ) was formed exclusively as a result of electrocyclic ring closure. [ 13 ] This requires the ends of the π-system to rotate in opposite directions to form the new σ-bond. Thermolysis of 6 follows the same stereochemical course as 3 : electrocyclic ring opening leads to the formation of ( E , Z )-2,4-hexadiene ( 7 ) and not 5 . [ 14 ] The Woodward-Hoffmann rules explain these results through orbital overlap: In the case of a photochemically driven electrocyclic ring-closure of buta-1,3-diene, electronic promotion causes Ψ 3 to become the HOMO and the reaction mechanism must be disrotatory. Conversely, in the electrocyclic ring-closure of the substituted hexa-1,3,5-triene pictured below, the reaction proceeds through a disrotatory mechanism. The Woodward–Hoffmann rules can be stated succinctly as a single sentence: [ 15 ] Generalized pericyclic selection rule. A ground-state pericyclic process involving N electron pairs and A antarafacial components is symmetry-allowed if and only if N + A is odd. A ground-state pericyclic process is brought about by addition of thermal energy (i.e., heating the system, symbolized by Δ ). In contrast, an excited-state pericyclic process takes place if a reactant is promoted to an electronically excited state by activation with ultraviolet light (i.e., irradiating the system, symbolized by h ν ). It is important to recognize, however, that the operative mechanism of a formally pericyclic reaction taking place under photochemical irradiation is generally not as simple or clearcut as this dichotomy suggests. Several modes of electronic excitation are usually possible, and electronically excited molecules may undergo intersystem crossing , radiationless decay, or relax to an unfavorable equilibrium geometry before the excited-state pericyclic process can take place. Thus, many apparent pericyclic reactions that take place under irradiation are actually thought to be stepwise processes involving diradical intermediates. Nevertheless, it is frequently observed that the pericyclic selection rules become reversed when switching from thermal to photochemical activation. This can be rationalized by considering the correlation of the first electronic excited states of the reactants and products. Although more of a useful heuristic than a rule, a corresponding generalized selection principle for photochemical pericyclic reactions can be stated: A pericyclic process involving N electron pairs and A antarafacial components is often favored under photochemical conditions if N + A is even. Pericyclic reactions involving an odd number of electrons are also known. With respect to application of the generalized pericyclic selection rule, these systems can generally be treated as though one more electron were involved. [ 16 ] In the language of aromatic transition state theory, the Woodward–Hoffmann rules can be restated as follows: A pericyclic transition state involving (4 n + 2) electrons with Hückel topology or 4 n electrons with Möbius topology is aromatic and allowed, while a pericyclic transition state involving 4 n electrons with Hückel topology or (4 n + 2) electrons with Möbius topology is antiaromatic and forbidden.
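Since the generalized rule is a simple parity test, it can be captured in a few lines of code. This is an illustrative sketch of the electron-pair and antarafacial-component count stated above, not a general pericyclic analysis tool; the photochemical branch encodes the heuristic (often-reversed) selection principle, not a strict rule:

```python
def symmetry_allowed(n_electron_pairs, n_antarafacial, photochemical=False):
    """Parity form of the generalized rule: a ground-state pericyclic process
    with N electron pairs and A antarafacial components is symmetry-allowed
    iff N + A is odd; under photochemical activation the parity is often
    reversed (a heuristic, per the text)."""
    parity = (n_electron_pairs + n_antarafacial) % 2
    return parity == (0 if photochemical else 1)

# Thermal disrotatory 6-pi electrocyclization: N = 3 pairs, A = 0 -> allowed
print(symmetry_allowed(3, 0))                      # True
# Thermal suprafacial-suprafacial [2+2] cycloaddition: N = 2, A = 0 -> forbidden
print(symmetry_allowed(2, 0))                      # False
# The same [2+2] under photochemical conditions is often favored
print(symmetry_allowed(2, 0, photochemical=True))  # True
```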
Longuet-Higgins and E. W. Abrahamson showed that the Woodward–Hoffmann rules can best be derived by examining the correlation diagram of a given reaction. [ 17 ] [ 16 ] [ 18 ] [ 19 ] A symmetry element is a point of reference (usually a plane or a line) about which an object is symmetric with respect to a symmetry operation. If a symmetry element is present throughout the reaction mechanism (reactant, transition state, and product), it is called a conserved symmetry element. Then, throughout the reaction, the symmetry of molecular orbitals with respect to this element must be conserved. That is, molecular orbitals that are symmetric with respect to the symmetry element in the starting material must be correlated to (transform into) orbitals symmetric with respect to that element in the product. Conversely, the same statement holds for antisymmetry with respect to a conserved symmetry element. A molecular orbital correlation diagram correlates molecular orbitals of the starting materials and the product based upon conservation of symmetry. From a molecular orbital correlation diagram one can construct an electronic state correlation diagram that correlates electronic states (i.e. ground state and excited states) of the reactants with electronic states of the products. Correlation diagrams can then be used to predict the height of transition state barriers. [ 20 ] Although orbital "symmetry" is used as a tool for sketching orbital and state correlation diagrams, the absolute presence or absence of a symmetry element is not critical for the determination of whether a reaction is allowed or forbidden. That is, the introduction of a simple substituent that formally disrupts a symmetry plane or axis (e.g., a methyl group) does not generally affect the assessment of whether a reaction is allowed or forbidden. Instead, the symmetry present in an unsubstituted analog is used to simplify the construction of orbital correlation diagrams and avoid the need to perform calculations. [ 21 ] Only the phase relationships between orbitals are important when judging whether a reaction is "symmetry"-allowed or forbidden. Moreover, orbital correlations can still be made, even if there are no conserved symmetry elements (e.g., 1,5-sigmatropic shifts and ene reactions). For this reason, the Woodward–Hoffmann, Fukui, and Dewar–Zimmerman analyses are equally broad in their applicability, though a certain approach may be easier or more intuitive to apply than another, depending on the reaction one wishes to analyze. Considering the electrocyclic ring closure of the substituted 1,3-butadiene, the reaction can proceed through either a conrotatory or a disrotatory reaction mechanism. As shown to the left, in the conrotatory transition state there is a C 2 axis of symmetry and in the disrotatory transition state there is a σ mirror plane of symmetry. In order to correlate orbitals of the starting material and product, one must determine whether the molecular orbitals are symmetric or antisymmetric with respect to these symmetry elements. The π-system molecular orbitals of butadiene are shown to the right, each along with the symmetry element with respect to which it is symmetric; each is antisymmetric with respect to the other element. For example, Ψ 2 of 1,3-butadiene is symmetric with respect to 180° rotation about the C 2 axis, and antisymmetric with respect to reflection in the mirror plane. Ψ 1 and Ψ 3 are symmetric with respect to the mirror plane as the sign of the p-orbital lobes is preserved under the symmetry transformation.
Similarly, Ψ1 and Ψ3 are antisymmetric with respect to the C2 axis, as the rotation inverts the sign of the p-orbital lobes uniformly. Conversely, Ψ2 and Ψ4 are symmetric with respect to the C2 axis and antisymmetric with respect to the σ mirror plane. The same analysis can be carried out for the molecular orbitals of cyclobutene. The result of both symmetry operations on each of the MOs is shown to the left. As the σ and σ* orbitals lie entirely in the plane containing C2 perpendicular to σ, they are uniformly symmetric and antisymmetric (respectively) with respect to both symmetry elements. On the other hand, π is symmetric with respect to reflection and antisymmetric with respect to rotation, while π* is antisymmetric with respect to reflection and symmetric with respect to rotation. Correlation lines are drawn to connect molecular orbitals in the starting material and the product that have the same symmetry with respect to the conserved symmetry element. In the case of the conrotatory 4-electron electrocyclic ring closure of 1,3-butadiene, the lowest molecular orbital Ψ1 is antisymmetric (A) with respect to the C2 axis. So this molecular orbital is correlated with the π orbital of cyclobutene, the lowest energy orbital that is also (A) with respect to the C2 axis. Similarly, Ψ2, which is symmetric (S) with respect to the C2 axis, is correlated with σ of cyclobutene. The final two correlations are between the antisymmetric (A) molecular orbitals Ψ3 and σ*, and the symmetric (S) molecular orbitals Ψ4 and π*. [ 16 ] Similarly, there exists a correlation diagram for a disrotatory mechanism. In this mechanism, the symmetry element that persists throughout the entire mechanism is the σ mirror plane of reflection. Here the lowest energy MO Ψ1 of 1,3-butadiene is symmetric with respect to the reflection plane, and as such correlates with the symmetric σ MO of cyclobutene. Similarly, the higher-energy pair of symmetric molecular orbitals Ψ3 and π correlate. As for the antisymmetric molecular orbitals, the lower energy pair Ψ2 and π* form a correlation pair, as do Ψ4 and σ*. [ 16 ] Evaluating the two mechanisms, the conrotatory mechanism is predicted to have a lower barrier because it transforms the electrons from ground-state orbitals of the reactants (Ψ1 and Ψ2) into ground-state orbitals of the product (σ and π). Conversely, the disrotatory mechanism forces the conversion of the Ψ1 orbital into the σ orbital, and the Ψ2 orbital into the π* orbital. Thus the two electrons in the ground-state Ψ2 orbital are transferred to an excited antibonding orbital, creating a doubly excited electronic state of the cyclobutene. This would lead to a significantly higher transition state barrier to reaction. [ 16 ]
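This symmetry-matching procedure is mechanical enough to express in a few lines of code. The following Python sketch (the function and orbital names are illustrative assumptions, not an established API) pairs energy-ordered reactant and product orbitals of like symmetry, reproducing the conrotatory correlations just described:

    # Pair reactant and product MOs of like symmetry, lowest energy first.
    # Orbitals are (name, symmetry-label) tuples, listed in order of
    # increasing energy; labels are with respect to the conserved element.
    def correlate(reactant_mos, product_mos):
        pairs = []
        remaining = list(product_mos)
        for name, sym in reactant_mos:
            # take the lowest-energy unused product MO of the same symmetry
            match = next(p for p in remaining if p[1] == sym)
            remaining.remove(match)
            pairs.append((name, match[0]))
        return pairs

    # Butadiene pi-MOs and cyclobutene MOs under the C2 axis conserved in
    # the conrotatory mode (S = symmetric, A = antisymmetric).
    butadiene = [("psi1", "A"), ("psi2", "S"), ("psi3", "A"), ("psi4", "S")]
    cyclobutene = [("sigma", "S"), ("pi", "A"), ("pi*", "S"), ("sigma*", "A")]
    print(correlate(butadiene, cyclobutene))
    # [('psi1', 'pi'), ('psi2', 'sigma'), ('psi3', 'sigma*'), ('psi4', 'pi*')]

Swapping in the mirror-plane labels (Ψ1 S, Ψ2 A, Ψ3 S, Ψ4 A against σ S, π S, π* A, σ* A) reproduces the disrotatory correlations, including the problematic Ψ2–π* pairing.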
However, as reactions do not take place between disjointed molecular orbitals but between electronic states, the final analysis involves state correlation diagrams. A state correlation diagram correlates the overall symmetry of electronic states in the starting material and product. The ground state of 1,3-butadiene, as shown above, has 2 electrons in Ψ1 and 2 electrons in Ψ2, so it is represented as Ψ1²Ψ2². The overall symmetry of the state is the product of the symmetries of each filled orbital, with multiplicity for doubly populated orbitals. Thus, as Ψ1 is antisymmetric with respect to the C2 axis and Ψ2 is symmetric, the total state is represented by A²S². To see why this particular product is mathematically overall S, S can be represented as (+1) and A as (−1). This derives from the fact that signs of the lobes of the p-orbitals are multiplied by (+1) if they are symmetric with respect to a symmetry transformation (i.e. unaltered) and multiplied by (−1) if they are antisymmetric with respect to a symmetry transformation (i.e. inverted). Thus A²S² = (−1)²(+1)² = +1 = S. The first excited state (ES-1) is formed from promoting an electron from the HOMO to the LUMO, and thus is represented as Ψ1²Ψ2Ψ3. As Ψ1 is A, Ψ2 is S, and Ψ3 is A, the symmetry of this state is given by A²SA = A. Now considering the electronic states of the product, cyclobutene, the ground state is given by σ²π², which has symmetry S²A² = S. The first excited state (ES-1') is again formed from promotion of an electron from the HOMO to the LUMO, so in this case it is represented as σ²ππ*. The symmetry of this state is S²AS = A. The ground state Ψ1²Ψ2² of 1,3-butadiene correlates with the ground state σ²π² of cyclobutene, as demonstrated in the MO correlation diagram above: Ψ1 correlates with π and Ψ2 correlates with σ. Thus the orbitals making up Ψ1²Ψ2² must transform into the orbitals making up σ²π² under a conrotatory mechanism. However, the state ES-1 does not correlate with the state ES-1', as the molecular orbitals do not transform into each other under the symmetry requirement seen in the molecular orbital correlation diagram. Instead, as Ψ1 correlates with π, Ψ2 correlates with σ, and Ψ3 correlates with σ*, the state Ψ1²Ψ2Ψ3 attempts to transform into π²σσ*, which is a different excited state. So ES-1 attempts to correlate with ES-2' = σπ²σ*, which is higher in energy than ES-1'. Similarly, ES-1' = σ²ππ* attempts to correlate with ES-2 = Ψ1Ψ2²Ψ4. These correlations cannot actually take place, due to the quantum-mechanical rule known as the avoided crossing rule, which says that energetic configurations of the same symmetry cannot cross on an energy level correlation diagram. In short, this is caused by mixing of states of the same symmetry when brought close enough in energy. So instead a high energy barrier is imposed on the forced transformation of ES-1 into ES-1'. In the diagram below the symmetry-preferred correlations are shown in dashed lines and the bold curved lines indicate the actual correlation with the high energy barrier. [ 16 ] [ 20 ] The same analysis can be applied to the disrotatory mechanism to create the following state correlation diagram. [ 16 ] [ 20 ] Thus if the molecule is in the ground state it will proceed through the conrotatory mechanism (i.e. under thermal control) to avoid an electronic barrier. However, if the molecule is in the first excited state (i.e. under photochemical control), the electronic barrier is present in the conrotatory mechanism and the reaction will proceed through the disrotatory mechanism. These are not completely distinct, as both the conrotatory and disrotatory mechanisms lie on the same potential surface. Thus a more correct statement is that as a ground-state molecule explores the potential energy surface, it is more likely to achieve the activation barrier to undergo a conrotatory mechanism. [ 20 ]
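The sign arithmetic behind these state correlation diagrams can be condensed into a few lines. A minimal Python sketch (the encoding of S as +1 and A as −1 follows the text above; the function name is an assumption for illustration):

    # Overall symmetry of an electronic state: the product, over all
    # electrons, of +1 for each S orbital and -1 for each A orbital.
    def state_symmetry(occupation):
        # occupation: list of (orbital_symmetry, electron_count) pairs
        sign = 1
        for sym, n in occupation:
            sign *= (1 if sym == "S" else -1) ** n
        return "S" if sign == 1 else "A"

    # Butadiene ground state psi1^2 psi2^2 under C2: A^2 S^2 = S
    print(state_symmetry([("A", 2), ("S", 2)]))            # S
    # First excited state psi1^2 psi2^1 psi3^1: A^2 S A = A
    print(state_symmetry([("A", 2), ("S", 1), ("A", 1)]))  # A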
The Woodward–Hoffmann rules can also explain bimolecular cycloaddition reactions through correlation diagrams. [ 22 ] A [πp + πq] cycloaddition brings together two components, one with p π-electrons and the other with q π-electrons. Cycloaddition reactions are further characterized as suprafacial (s) or antarafacial (a) with respect to each of the π components. (See "General formulation" below for a detailed description of the generalization of WH notation to all pericyclic processes.) For ordinary alkenes, [2+2] cycloadditions are only observed under photochemical activation. The rationale for the non-observation of thermal [2+2] cycloadditions begins with the analysis of the four possible stereochemical consequences for the [2+2] cycloaddition: [π2s + π2s], [π2a + π2s], [π2s + π2a], [π2a + π2a]. The geometrically most plausible [π2s + π2s] mode is forbidden under thermal conditions, while the [π2a + π2s] and [π2s + π2a] approaches are allowed from the point of view of symmetry but are rare due to an unfavorable strain and steric profile. [ 16 ] Consider the [π2s + π2s] cycloaddition: this mechanism leads to retention of stereochemistry in the product, as illustrated to the right. Two symmetry elements are present in the starting materials, transition state, and product: σ1 and σ2. σ1 is the mirror plane between the components, perpendicular to the p-orbitals; σ2 splits the molecules in half, perpendicular to the σ-bonds. [ 22 ] These are both local-symmetry elements in the case that the components are not identical. To determine symmetry and antisymmetry with respect to σ1 and σ2, the starting material molecular orbitals must be considered in tandem. The figure to the right shows the molecular orbital correlation diagram for the [π2s + π2s] cycloaddition. The two π and π* molecular orbitals of the starting materials are characterized by their symmetry with respect to first σ1 and then σ2. Similarly, the σ and σ* molecular orbitals of the product are characterized by their symmetry. In the correlation diagram, molecular orbital transformations over the course of the reaction must conserve the symmetry of the molecular orbitals. Thus πSS correlates with σSS, πAS correlates with σ*AS, π*SA correlates with σSA, and finally π*AA correlates with σ*AA. Due to conservation of orbital symmetry, the bonding orbital πAS is forced to correlate with the antibonding orbital σ*AS. Thus a high barrier is predicted. [ 16 ] [ 20 ] [ 22 ] This is made precise in the state correlation diagram below. [ 16 ] [ 20 ] The ground state in the starting materials is the electronic state where πSS and πAS are both doubly populated, i.e. the state (SS)²(AS)². As such, this state attempts to correlate with the electronic state in the product where both σSS and σ*AS are doubly populated, i.e. the state (SS)²(AS)². However, this state is neither the ground state (SS)²(SA)² of cyclobutane, nor the first excited state ES-1' = (SS)²(SA)(AS), where an electron is promoted from the HOMO to the LUMO. A [4+2] cycloaddition is exemplified by the Diels-Alder reaction. The simplest case is the reaction of 1,3-butadiene with ethylene to form cyclohexene. One symmetry element is conserved in this transformation – the mirror plane through the center of the reactants, as shown to the left. The molecular orbitals of the reactants are the set {Ψ1, Ψ2, Ψ3, Ψ4} of molecular orbitals of 1,3-butadiene shown above, along with π and π* of ethylene. Ψ1 is symmetric, Ψ2 is antisymmetric, Ψ3 is symmetric, and Ψ4 is antisymmetric with respect to the mirror plane. Similarly, π is symmetric and π* is antisymmetric with respect to the mirror plane.
The molecular orbitals of the product are the symmetric and antisymmetric combinations of the two newly formed σ and σ* bonds and the π and π* bonds, as shown below. Correlating the pairs of orbitals in the starting materials and product of the same symmetry and increasing energy gives the correlation diagram to the right. As this transforms the ground-state bonding molecular orbitals of the starting materials into the ground-state bonding orbitals of the product in a symmetry-conserving manner, this reaction is predicted not to have the large energy barrier present in the ground-state [2+2] reaction above. To make the analysis precise, one can construct the state correlation diagram for the general [4+2]-cycloaddition. [ 20 ] As before, the ground state is the electronic state depicted in the molecular orbital correlation diagram to the right. This can be described as Ψ1²π²Ψ2², of total symmetry S²S²A² = S. This correlates with the ground state of the cyclohexene, σS²σA²π², which is also overall S. As such, this ground-state reaction is not predicted to have a high symmetry-imposed barrier. One can also construct the excited-state correlations as is done above. Here, there is a high energy barrier to a photo-induced Diels-Alder reaction under a suprafacial-suprafacial bond topology, due to the avoided crossing shown below. The symmetry-imposed barrier heights of group transfer reactions can also be analyzed using correlation diagrams. A model reaction is the transfer of a pair of hydrogen atoms from ethane to perdeuterioethylene, shown to the right. The only conserved symmetry element in this reaction is the mirror plane through the center of the molecules, as shown to the left. The molecular orbitals of the system are constructed as symmetric and antisymmetric combinations of σ and σ* C–H bonds in ethane and π and π* bonds in the deutero-substituted ethene. Thus the lowest energy MO is the symmetric sum of the two C–H σ-bonds (σS), followed by the antisymmetric sum (σA). The two highest energy MOs are formed from linear combinations of the σCH antibonds – highest is the antisymmetric σ*A, preceded by the symmetric σ*S at a slightly lower energy. In the middle of the energetic scale are the two remaining MOs, the πCC and π*CC of ethene. The full molecular orbital correlation diagram is constructed by matching pairs of symmetric and antisymmetric MOs of increasing total energy, as explained above. As can be seen in the adjacent diagram, since the bonding orbitals of the reactants exactly correlate with the bonding orbitals of the products, this reaction is not predicted to have a high electronic symmetry-imposed barrier. [ 16 ] [ 20 ] Using correlation diagrams one can derive selection rules for the following generalized classes of pericyclic reactions. Each of these particular classes is further generalized in the generalized Woodward–Hoffmann rules. The more inclusive bond topology descriptors antarafacial and suprafacial subsume the terms conrotatory and disrotatory, respectively. Antarafacial refers to bond making or breaking through opposite faces of a π system, p orbital, or σ bond, while suprafacial refers to the process occurring through the same face. A suprafacial transformation at a chiral center preserves stereochemistry, whereas an antarafacial transformation reverses stereochemistry. The selection rule for electrocyclization reactions is given in the original statement of the Woodward–Hoffmann rules.
If a generalized electrocyclic ring closure occurs in a polyene of 4n π-electrons, then it is conrotatory under thermal conditions and disrotatory under photochemical conditions. Conversely, in a polyene of 4n + 2 π-electrons, an electrocyclic ring closure is disrotatory under thermal conditions and conrotatory under photochemical conditions. This result can either be derived via an FMO analysis based upon the sign of the p-orbital lobes of the HOMO of the polyene, or with correlation diagrams. Taking the first possibility, in the ground state, if a polyene has 4n electrons, the outer p-orbitals of the HOMO that form the σ bond in the electrocyclized product are of opposite signs. Thus a constructive overlap is only produced under a conrotatory or antarafacial process. Conversely, for a polyene with 4n + 2 electrons, the outer p-orbitals of the ground-state HOMO are of the same sign. Thus constructive orbital overlap occurs with a disrotatory or suprafacial process. [ 2 ] Additionally, the correlation diagram for any 4n electrocyclic reaction will resemble the diagram for the 4-electron cyclization of 1,3-butadiene, while the correlation diagram for any 4n + 2 electron electrocyclic reaction will resemble the correlation diagram for the 6-electron cyclization of 1,3,5-hexatriene. [ 16 ] In summary: under thermal conditions, 4n-electron electrocyclizations are conrotatory and (4n + 2)-electron electrocyclizations are disrotatory, and under photochemical conditions these preferences are reversed. A general sigmatropic rearrangement can be classified as order [i,j], meaning that a σ bond originally between atoms denoted 1 and 1', adjacent to one or more π systems, is shifted to between atoms i and j. Thus it migrates (i − 1), (j − 1) atoms away from its original position. A formal symmetry analysis via correlation diagrams is of no use in the study of sigmatropic rearrangements, as, in general, symmetry elements are present only at the transition state. Except in special cases (e.g. [3,3]-rearrangements), there are no symmetry elements that are conserved as the reaction coordinate is traversed. [ 16 ] [ 20 ] Nevertheless, orbital correlations between starting materials and products can still be analyzed, and correlations of starting material orbitals with high-energy product orbitals will, as usual, result in "symmetry-forbidden" processes. However, an FMO-based approach (or the Dewar-Zimmerman analysis) is more straightforward to apply. One of the most prevalent classes of sigmatropic shifts is classified as [1,j], where j is odd. That means one terminus of the σ-bond migrates (j − 1) bonds away across a π-system while the other terminus does not migrate. It is a reaction involving j + 1 electrons: j − 1 from the π-system and 2 from the σ-bond. Using FMO analysis, [1,j]-sigmatropic rearrangements are allowed if the transition state has constructive overlap between the migrating group and the accepting p orbital of the HOMO. In [1,j]-sigmatropic rearrangements, if j + 1 = 4n, then supra/antara is thermally allowed, and if j + 1 = 4n + 2, then supra/supra or antara/antara is thermally allowed. [ 20 ] The other prevalent class of sigmatropic rearrangements is [3,3], notably the Cope and Claisen rearrangements. Here, the constructive interactions must be between the HOMOs of the two allyl radical fragments in the transition state. The ground-state HOMO Ψ2 of the allyl fragment is shown below. As the terminal p-orbitals are of opposite sign, this reaction can either take place in a supra/supra topology or an antara/antara topology. [ 20 ]
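These electron-count rules lend themselves to a compact implementation. A Python sketch (the function names and string encodings are illustrative assumptions, not established conventions):

    # Thermal/photochemical mode of an electrocyclization, and the thermally
    # allowed topology of a [1,j] hydride shift, from electron count alone.
    def electrocyclic_mode(pi_electrons, photochemical=False):
        conrotatory = (pi_electrons % 4 == 0)
        if photochemical:
            conrotatory = not conrotatory  # selection rules reverse with light
        return "conrotatory" if conrotatory else "disrotatory"

    def hydride_shift_topology(j):
        # a [1,j] H shift involves j + 1 electrons; the hydrogen migrates
        # suprafacially or antarafacially across the pi system
        return "antarafacial" if (j + 1) % 4 == 0 else "suprafacial"

    print(electrocyclic_mode(4))        # conrotatory (butadiene, thermal)
    print(electrocyclic_mode(6))        # disrotatory (hexatriene, thermal)
    print(electrocyclic_mode(6, True))  # conrotatory (photochemical)
    print(hydride_shift_topology(3))    # antarafacial
    print(hydride_shift_topology(5))    # suprafacial

The [1,3] case illustrates why thermal [1,3]-hydride shifts are essentially unobserved: the allowed antarafacial pathway is geometrically prohibitive, whereas the suprafacial [1,5] shift is facile.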
The selection rules for an [i,j]-sigmatropic rearrangement are as follows: when i + j = 4n, the supra/antara and antara/supra topologies are thermally allowed, and when i + j = 4n + 2, the supra/supra and antara/antara topologies are thermally allowed (with the preferences reversed under photochemical conditions). A general [p+q]-cycloaddition is a concerted addition reaction between two components, one with p π-electrons and one with q π-electrons. This reaction is symmetry-allowed under the following conditions: [ 16 ] if p + q = 4n + 2, the cycloaddition is thermally allowed when it is suprafacial with respect to both components or antarafacial with respect to both; if p + q = 4n, it is thermally allowed when it is suprafacial with respect to one component and antarafacial with respect to the other. A general double group transfer reaction which is synchronous can be represented as an interaction between a component with p π-electrons and a component with q π-electrons, as shown. Then the selection rules are the same as for the generalized cycloaddition reactions. [ 16 ] The case of q = 0 corresponds to the thermal elimination of the "transferred" R groups. There is evidence that the pyrolytic eliminations of dihydrogen and ethane from 1,4-cyclohexadiene and 3,3,6,6-tetramethyl-1,4-cyclohexadiene, respectively, represent examples of this type of pericyclic process. The ene reaction is often classified as a type of group transfer process, even though it does not involve the transfer of two σ-bonded groups. Rather, only one σ-bond is transferred while a second σ-bond is formed from a broken π-bond. As an all-suprafacial process involving 6 electrons, it is symmetry-allowed under thermal conditions. The Woodward–Hoffmann symbol for the ene reaction is [π2s + π2s + σ2s] (see below). Though the Woodward–Hoffmann rules were first stated in terms of electrocyclic processes, they were eventually generalized to all pericyclic reactions, as the similarity and patterns in the above selection rules should indicate. In the generalized Woodward–Hoffmann rules, everything is characterized in terms of antarafacial and suprafacial bond topologies. The terms conrotatory and disrotatory are sufficient for describing the relative sense of bond rotation in electrocyclic ring-closing or ring-opening reactions, as illustrated on the right. However, they are unsuitable for describing the topologies of bond forming and breaking taking place in a general pericyclic reaction. As described in detail below, in the general formulation of the Woodward–Hoffmann rules, the bond rotation terms conrotatory and disrotatory are subsumed by the bond topology (or faciality) terms antarafacial and suprafacial, respectively. These descriptors can be used to characterize the topology of the bond forming and breaking that takes place in any pericyclic process. A component is any part of a molecule or molecules that functions as a unit in a pericyclic reaction. A component consists of one or more atoms and any of the following types of associated orbitals: an isolated p or spx hybrid orbital (symbol ω), a conjugated π system (symbol π), or a σ bond (symbol σ). The electron count of a component is the number of electrons in the orbital(s) of the component: an unfilled ω orbital contributes 0 electrons, a filled ω orbital or a σ bond contributes 2 electrons, and a conjugated π system with n double bonds contributes 2n electrons. The bond topology of a component can be suprafacial or antarafacial. Using this notation, all pericyclic reactions can be assigned a descriptor, consisting of a series of symbols σ/π/ω N s/a, connected by + signs and enclosed in brackets, describing, in order, the type of orbital(s), number of electrons, and bond topology involved for each component. Some illustrative examples: the Diels-Alder reaction is [π4s + π2s]; the conrotatory ring opening of cyclobutene is [σ2a + π2s]; and the ene reaction is [π2s + π2s + σ2s]. Antarafacial and suprafacial are associated with conrotation (or inversion) and disrotation (or retention), respectively.
A single descriptor may correspond to two pericyclic processes that are chemically distinct, a reaction and its microscopic reverse are often described with two different descriptors, and a single process may have more than one correct descriptor. One can verify, using the pericyclic selection rule given below, that all of these reactions are allowed processes. Using this notation, Woodward and Hoffmann state in their 1969 review the general formulation for all pericyclic reactions as follows: A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd. [ 16 ] Here, (4q + 2)s and (4r)a refer to suprafacial (4q + 2)-electron and antarafacial (4r)-electron components, respectively. Moreover, this criterion should be interpreted as both sufficient (stated above) and necessary (not explicitly stated above; see: if and only if). Alternatively, the general statement can be formulated in terms of the total number of electrons using simple rules of divisibility by a straightforward analysis of two cases. First, consider the case where the total number of electrons is 4n + 2: 4n + 2 = a(4q + 2) + b(4q′ + 2) + c(4r) + d(4r′), where a, b, c, and d are coefficients indicating the number of each type of component: a counts suprafacial (4q + 2)-electron components, b antarafacial (4q + 2)-electron components, c suprafacial (4r)-electron components, and d antarafacial (4r)-electron components. This equation implies that one of, but not both, a or b is odd, for if a and b are both even or both odd, then the sum of the four terms is 0 (mod 4). The generalized statement of the Woodward–Hoffmann rules states that a + d is odd if the reaction is allowed. Now, if a is even, then this implies that d is odd. Since b is odd in this case, the number of antarafacial components, b + d, is even. Likewise, if a is odd, then d is even. Since b is even in this case, the number of antarafacial components, b + d, is again even. Thus, regardless of the initial assumption of parity for a and b, the number of antarafacial components is even when the electron count is 4n + 2 and the reaction is allowed. Contrariwise, in the forbidden case, b + d is odd. In the case where the total number of electrons is 4n, similar arguments (omitted here) lead to the conclusion that the number of antarafacial components b + d must be odd in the allowed case and even in the forbidden case. Finally, to complete the argument and show that this new criterion is truly equivalent to the original criterion, one needs to argue the converse statements as well, namely, that the number of antarafacial components b + d together with the electron count (4n + 2 or 4n) implies the parity of a + d that is given by the Woodward–Hoffmann rules (odd for allowed, even for forbidden). Another round of (somewhat tedious) case analyses will easily show this to be the case. The pericyclic selection rule states: A pericyclic process involving 4n + 2 or 4n electrons is thermally allowed if and only if the number of antarafacial components involved is even or odd, respectively. In this formulation, the electron count refers to the entire reacting system, rather than to individual components, as enumerated in Woodward and Hoffmann's original statement. In practice, an even or odd number of antarafacial components usually means zero or one antarafacial component, respectively, as transition states involving two or more antarafacial components are typically disfavored by strain. As exceptions, certain intramolecular reactions may be geometrically constrained in such a way that enforces an antarafacial trajectory for multiple components. In addition, in some cases, e.g., the Cope rearrangement, the same (not necessarily strained) transition state geometry can be considered to contain two supra or two antara π components, depending on how one draws the connections between orbital lobes. (This ambiguity is a consequence of the convention that overlap of either both interior or both exterior lobes of a σ component can be considered to be suprafacial.)
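This counting rule is simple to implement directly from the component descriptors. A Python sketch (the tuple encoding of components is an assumption made for illustration):

    # Generalized Woodward-Hoffmann rule: a ground-state pericyclic process
    # is symmetry-allowed iff the number of suprafacial (4q+2)-electron
    # components plus antarafacial (4r)-electron components is odd.
    def thermally_allowed(components):
        # components: list of (electron_count, facial) with facial 's' or 'a'
        count = 0
        for electrons, facial in components:
            if electrons % 4 == 2 and facial == "s":
                count += 1  # a (4q+2)_s component
            elif electrons % 4 == 0 and facial == "a":
                count += 1  # a (4r)_a component
        return count % 2 == 1

    print(thermally_allowed([(4, "s"), (2, "s")]))  # True: Diels-Alder [pi4s + pi2s]
    print(thermally_allowed([(2, "s"), (2, "s")]))  # False: thermal [pi2s + pi2s]
    print(thermally_allowed([(2, "a"), (2, "s")]))  # True: conrotatory opening [sigma2a + pi2s]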
This alternative formulation, counting antarafacial components against the total electron count, makes the equivalence of the Woodward–Hoffmann rules to the Dewar–Zimmerman analysis (see below) clear. An even total number of phase inversions is equivalent to an even number of antarafacial components and corresponds to Hückel topology, requiring 4n + 2 electrons for aromaticity, while an odd total number of phase inversions is equivalent to an odd number of antarafacial components and corresponds to Möbius topology, requiring 4n electrons for aromaticity. [ 24 ] To summarize aromatic transition state theory: thermal pericyclic reactions proceed via (4n + 2)-electron Hückel or (4n)-electron Möbius transition states. As a mnemonic, the above formulation can be further restated as the following: A ground-state pericyclic process involving N electron pairs and A antarafacial components is symmetry-allowed if and only if N + A is odd. The equivalence of the two formulations can also be seen by a simple parity argument without appeal to case analysis. Proposition. The following formulations of the Woodward–Hoffmann rules are equivalent: (A) For a pericyclic reaction, if the sum of the number of suprafacial (4q + 2)-electron components and antarafacial (4r)-electron components is odd, then it is thermally allowed; otherwise the reaction is thermally forbidden. (B) For a pericyclic reaction, if the total number of antarafacial components of a (4n + 2)-electron reaction is even, or the total number of antarafacial components of a (4n)-electron reaction is odd, then it is thermally allowed; otherwise the reaction is thermally forbidden. Proof of equivalence: Index the components of a k-component pericyclic reaction i = 1, 2, ..., k, and assign component i, with Woodward–Hoffmann symbol σ/π/ω N s/a, the electron count and topology parity symbol (n_i, p_i, i) according to the following rules: n_i = 0 if N ≡ 0 (mod 4) and n_i = 1 if N ≡ 2 (mod 4), while p_i = 0 if component i is suprafacial and p_i = 1 if it is antarafacial. We have a mathematically equivalent restatement of (A): (A') A collection of symbols {(n_i, p_i, i)} is thermally allowed if and only if the number of symbols with the property n_i ≠ p_i is odd. Since the total electron count is 4n + 2 or 4n precisely when Σ_i n_i (the number of (4q + 2)-electron components) is odd or even, respectively, while Σ_i p_i gives the number of antarafacial components, we can also restate (B): (B') A collection of symbols {(n_i, p_i, i)} is thermally allowed if and only if exactly one of Σ_i n_i or Σ_i p_i is odd. It suffices to show that (A') and (B') are equivalent.
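Before the formal argument, the claimed equivalence can be spot-checked by brute force; a short Python sketch (the pair encoding follows the (n_i, p_i) convention just defined):

    # (A'): the number of components with n_i != p_i is odd.
    # (B'): exactly one of sum(n_i) and sum(p_i) is odd.
    from itertools import product

    def allowed_A(symbols):
        return sum(1 for n, p in symbols if n != p) % 2 == 1

    def allowed_B(symbols):
        return (sum(n for n, p in symbols) % 2) != (sum(p for n, p in symbols) % 2)

    # exhaustive check over all collections of up to five components
    for k in range(1, 6):
        for symbols in product([(0, 0), (0, 1), (1, 0), (1, 1)], repeat=k):
            assert allowed_A(symbols) == allowed_B(symbols)
    print("(A') and (B') agree in every case")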
Turning to the formal argument: exactly one of Σ_i n_i or Σ_i p_i is odd if and only if Σ_i n_i + Σ_i p_i = Σ_i (n_i + p_i) is odd. If n_i = p_i, then n_i + p_i ≡ 0 (mod 2); hence, omission of symbols with the property n_i = p_i from a collection will not change the parity of Σ_i (n_i + p_i). On the other hand, when n_i ≠ p_i, we have n_i + p_i = 1, and summing 1 over such components simply enumerates the number of components with the property n_i ≠ p_i. Therefore, Σ_i (n_i + p_i) ≡ #{(n_i, p_i, i) | n_i ≠ p_i} (mod 2). Thus, Σ_i (n_i + p_i) and the number of symbols in a collection with the property n_i ≠ p_i have the same parity. Since formulations (A') and (B') are equivalent, so are (A) and (B), as claimed. □ To give a concrete example, a hypothetical reaction with the descriptor [π6s + π4a + π2a] would be assigned the collection {(1, 0, 1), (0, 1, 2), (1, 1, 3)} in the scheme above. There are two components, (1, 0, 1) and (0, 1, 2), with the property n_i ≠ p_i, so the reaction is not allowed by (A'). Likewise, Σ_i n_i = 2 and Σ_i p_i = 2 are both even, so (B') yields the same conclusion (as it must): the reaction is not allowed. For a 2-component reaction, this formulation is equivalent to the [p+q]-cycloaddition selection rules given above: if the total number of electrons is 4n + 2, the reaction is thermally allowed if it is suprafacial with respect to both components or antarafacial with respect to both components, that is to say, if the number of antarafacial components is even (0 or 2); if the total number of electrons is 4n, the reaction is thermally allowed if it is suprafacial with respect to one component and antarafacial with respect to the other, so that the total number of antarafacial components is odd (it is always 1). The following are some common ground-state (i.e. thermal) reaction classes analyzed in light of the generalized Woodward–Hoffmann rules. A [2+2]-cycloaddition is a 4-electron process that brings together two components. Thus, by the above general WH rules, it is only allowed if the reaction is antarafacial with respect to exactly one component. This is the same conclusion reached with correlation diagrams in the section above. A rare but stereochemically unambiguous example of a [π2s + π2a]-cycloaddition is shown on the right. The strain and steric properties of the trans double bond enable this generally kinetically unfavorable process. cis,trans-1,5-Cyclooctadiene is also believed to undergo dimerization via this mode. [ 16 ]
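The two-component case just described reduces to a tiny predicate; a Python sketch (names and encodings are, again, illustrative assumptions):

    # Thermal [p+q] cycloaddition: 4n+2 total electrons require an even
    # number of antarafacial components, 4n an odd number.
    def cycloaddition_allowed(p, q, p_facial, q_facial):
        antara = [p_facial, q_facial].count("a")
        need_odd = ((p + q) % 4 == 0)
        return (antara % 2 == 1) == need_odd

    print(cycloaddition_allowed(4, 2, "s", "s"))  # True: the Diels-Alder reaction
    print(cycloaddition_allowed(2, 2, "s", "s"))  # False: thermal [pi2s + pi2s]
    print(cycloaddition_allowed(2, 2, "s", "a"))  # True: the strained [pi2s + pi2a] mode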
Ketenes are a large class of reactants favoring [2+2] cycloaddition with olefins. The MO analysis of ketene cycloaddition is rendered complicated and ambiguous by the simultaneous but independent interaction of the orthogonal orbitals of the ketene, but may involve a [π2s + π2a] interaction as well. [ 25 ] The synchronous 6π-electron Diels-Alder reaction is a [π4s + π2s]-cycloaddition (i.e. suprafacial with respect to both components), as exemplified by the reaction to the right. Thus, as the total number of antarafacial components is 0, which is even, the reaction is symmetry-allowed. [ 16 ] This prediction agrees with experiment, as the Diels-Alder reaction is a rather facile pericyclic reaction. A 4n-electron electrocyclic ring opening reaction can be considered to have 2 components – the π-system and the breaking σ-bond. With respect to the π-system, the reaction is suprafacial. However, with a conrotatory mechanism, as shown in the figure above, the reaction is antarafacial with respect to the σ-bond. Conversely, with a disrotatory mechanism it is suprafacial with respect to the breaking σ-bond. By the above rules, for a 4n-electron pericyclic reaction of 2 components, there must be one antarafacial component. Thus the reaction must proceed through a conrotatory mechanism. [ 16 ] This agrees with the result derived in the correlation diagrams above. A (4n + 2)-electron electrocyclic ring opening reaction is also a 2-component pericyclic reaction which is suprafacial with respect to the π-system. Thus, in order for the reaction to be allowed, the number of antarafacial components must be 0, i.e. it must be suprafacial with respect to the breaking σ-bond as well. Thus a disrotatory mechanism is symmetry-allowed. [ 16 ] A [1,j]-sigmatropic rearrangement is also a two-component pericyclic reaction: one component is the π-system, the other component is the migrating group. The simplest case is a [1,j]-hydride shift across a π-system where j is odd. In this case, as the hydrogen has only a spherically symmetric s orbital, the reaction must be suprafacial with respect to the hydrogen. The total number of electrons involved is (j + 1), as there are (j − 1)/2 π-bonds plus the σ-bond involved in the reaction. If j = 4n − 1, then the shift must be antarafacial with respect to the π-system, and if j = 4n + 1, then it must be suprafacial. [ 16 ] This agrees with the experimental observation that [1,3]-hydride shifts are generally not observed, as the symmetry-allowed antarafacial process is not feasible, while [1,5]-hydride shifts are quite facile. For a [1,j]-alkyl shift, where the reaction can be antarafacial (i.e. invert stereochemistry) with respect to the carbon center, the same rules apply. If j = 4n − 1, then the reaction is symmetry-allowed if it is either antarafacial with respect to the π-system or inverts stereochemistry at the carbon. If j = 4n + 1, then the reaction is symmetry-allowed if it is suprafacial with respect to the π-system and retains stereochemistry at the carbon center. [ 16 ] On the right is one of the first examples of a [1,3]-sigmatropic shift to be discovered, reported by Berson in 1967. [ 26 ] In order to allow for inversion of configuration as the σ bond breaks, the C(H)(D) moiety twists around at the transition state, with the hybridization of the carbon approximating sp², so that the remaining unhybridized p orbital maintains overlap with both carbons 1 and 3.
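The alkyl case adds retention or inversion at the migrating carbon as a second facial choice; a Python sketch (the string encodings are assumptions for illustration):

    # Thermal [1,j] alkyl shift: (j+1) electrons; the migrating sp3 carbon
    # counts as suprafacial with retention ('ret') and antarafacial with
    # inversion ('inv') of configuration.
    def alkyl_shift_allowed(j, pi_facial, center):
        antara = (1 if pi_facial == "a" else 0) + (1 if center == "inv" else 0)
        need_odd = ((j + 1) % 4 == 0)
        return (antara % 2 == 1) == need_odd

    print(alkyl_shift_allowed(3, "s", "inv"))  # True: Berson's [1,3] shift with inversion
    print(alkyl_shift_allowed(3, "s", "ret"))  # False: suprafacial [1,3] with retention
    print(alkyl_shift_allowed(5, "s", "ret"))  # True: facile [1,5] shift with retention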
The generalized Woodward–Hoffmann rules, first given in 1969, are equivalent to an earlier general approach, the Möbius-Hückel concept of Zimmerman, which was first stated in 1966 and is also known as aromatic transition state theory. [ 3 ] [ 27 ] [ 28 ] As its central tenet, aromatic transition state theory holds that 'allowed' pericyclic reactions proceed via transition states with aromatic character, while 'forbidden' pericyclic reactions would encounter transition states that are antiaromatic in nature. In the Dewar-Zimmerman analysis, one is concerned with the topology of the transition state of the pericyclic reaction. If the transition state involves 4n electrons, the Möbius topology is aromatic and the Hückel topology is antiaromatic, while if the transition state involves 4n + 2 electrons, the Hückel topology is aromatic and the Möbius topology is antiaromatic. The parity of the number of phase inversions (described in detail below) in the transition state determines its topology. A Möbius topology involves an odd number of phase inversions, whereas a Hückel topology involves an even number of phase inversions. In connection with Woodward–Hoffmann terminology, the number of antarafacial components and the number of phase inversions always have the same parity. [ 24 ] Consequently, an odd number of antarafacial components gives Möbius topology, while an even number gives Hückel topology. Thus, to restate the results of aromatic transition state theory in the language of Woodward and Hoffmann: a 4n-electron reaction is thermally allowed if and only if it has an odd number of antarafacial components (i.e., Möbius topology); a (4n + 2)-electron reaction is thermally allowed if and only if it has an even number of antarafacial components (i.e., Hückel topology). Procedure for Dewar-Zimmerman analysis (examples shown on the right): Step 1. Shade in all basis orbitals that are part of the pericyclic system. The shading can be arbitrary. In particular, the shading does not need to reflect the phasing of the polyene MOs; each basis orbital simply needs to have two oppositely phased lobes in the case of p or spx hybrid orbitals, or a single phase in the case of an s orbital. Step 2. Draw connections between the lobes of basis orbitals that are geometrically well-disposed to interact at the transition state. The connections to be made depend on the transition state topology. (For example, in the figure, different connections are shown in the cases of con- and disrotatory electrocyclization.) Step 3. Count the number of connections that occur between lobes of opposite shading: each of these connections constitutes a phase inversion. If the number of phase inversions is even, the transition state is Hückel, while if the number of phase inversions is odd, the transition state is Möbius. Step 4. Conclude that the pericyclic reaction is allowed if the electron count is 4n + 2 and the transition state is Hückel, or if the electron count is 4n and the transition state is Möbius; otherwise, conclude that the pericyclic reaction is forbidden. Importantly, any scheme of assigning relative phases to the basis orbitals is acceptable, as inverting the phase of any single orbital adds 0 or ±2 phase inversions to the total, an even number, so that the parity of the number of inversions (number of inversions modulo 2) is unchanged.
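Steps 3 and 4 amount to a two-line parity test; a Python sketch (the function name is an assumption):

    # Dewar-Zimmerman: Hückel (even phase inversions) is aromatic for 4n+2
    # electrons; Möbius (odd inversions) is aromatic for 4n electrons.
    def dewar_zimmerman_allowed(electrons, phase_inversions):
        huckel = (phase_inversions % 2 == 0)
        return huckel == (electrons % 4 == 2)

    print(dewar_zimmerman_allowed(4, 1))  # True: conrotatory butadiene closure (Möbius)
    print(dewar_zimmerman_allowed(4, 0))  # False: disrotatory closure (Hückel, antiaromatic)
    print(dewar_zimmerman_allowed(6, 0))  # True: disrotatory hexatriene closure (Hückel)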
Recently, the Woodward–Hoffmann rules have been reinterpreted using conceptual density functional theory (DFT). [ 29 ] [ 30 ] The key to the analysis is the dual descriptor function, proposed by Christophe Morell, André Grand and Alejandro Toro-Labbé, [ 31 ] f⁽²⁾(r) = ∂²ρ(r)/∂N², the second derivative of the electron density ρ(r) with respect to the number of electrons N. This response function is important, as the reaction of two components A and B involving a transfer of electrons will depend on the responsiveness of the electron density to electron donation or acceptance, i.e. the derivative of the Fukui function f(r) = ∂ρ(r)/∂N. In fact, from a simplistic viewpoint, the dual descriptor function gives a readout on the electrophilicity or nucleophilicity of the various regions of the molecule. For f⁽²⁾(r) > 0, the region is electrophilic, and for f⁽²⁾(r) < 0, the region is nucleophilic. Using the frontier molecular orbital assumption and a finite difference approximation of the Fukui function, one may write the dual descriptor as f⁽²⁾(r) ≈ ρLUMO(r) − ρHOMO(r). This makes intuitive sense: if a region is better at accepting electrons than donating, then the LUMO must dominate and the dual descriptor function will be positive. Conversely, if a region is better at donating electrons, then the HOMO term will dominate and the descriptor will be negative. Notice that although the concepts of phase and orbitals are replaced simply by the notion of electron density, this function still takes both positive and negative values. The Woodward–Hoffmann rules are reinterpreted using this formulation by matching favorable interactions between regions of electron density for which the dual descriptor has opposite signs. This is equivalent to maximizing predicted favorable interactions and minimizing repulsive interactions. For the case of a [4+2] cycloaddition, a simplified schematic of the reactants with the dual descriptor function colored (red = positive, blue = negative) is shown in the optimal supra/supra configuration to the left. This method correctly predicts the WH rules for the major classes of pericyclic reactions. In Chapter 12 of The Conservation of Orbital Symmetry, entitled "Violations," Woodward and Hoffmann famously stated: There are none! Nor can violations be expected of so fundamental a principle of maximum bonding. This pronouncement notwithstanding, it is important to recognize that the Woodward–Hoffmann rules are used to predict relative barrier heights, and thus likely reaction mechanisms, and that they only take into account barriers due to conservation of orbital symmetry. Thus it is not guaranteed that a WH symmetry-allowed reaction actually takes place in a facile manner. Conversely, it is possible, upon enough energetic input, to achieve an anti-Woodward–Hoffmann product. This is especially prevalent in sterically constrained systems, where the WH product has an added steric barrier to overcome. For example, in the electrocyclic ring-opening of the dimethylbicyclo[3.2.0]heptene derivative ( 1 ), a conrotatory mechanism is not possible due to the resulting angle strain, and the reaction proceeds slowly through a disrotatory mechanism at 400 °C to give a cycloheptadiene product. [ 2 ] Violations may also be observed in cases with very strong thermodynamic driving forces.
The decomposition of dioxetane-1,2-dione to two molecules of carbon dioxide, famous for its role in the luminescence of glowsticks, has been scrutinized computationally. In the absence of fluorescers, the reaction is now believed to proceed in a concerted (though asynchronous) fashion, via a retro-[2+2]-cycloaddition that formally violates the Woodward–Hoffmann rules. [ 32 ] Similarly, a recent paper describes how mechanical stress can be used to reshape chemical reaction pathways to lead to products that apparently violate the Woodward–Hoffmann rules. [ 33 ] In this paper, the authors use ultrasound irradiation to induce a mechanical stress on link-functionalized polymers attached syn or anti on the cyclobutene ring. Computational studies predict that the mechanical force, resulting from friction of the polymers, induces bond lengthening along the reaction coordinate of the conrotatory mechanism in the anti-bisubstituted cyclobutene, and along the reaction coordinate of the disrotatory mechanism in the syn-bisubstituted cyclobutene. Thus in the syn-bisubstituted cyclobutene, the anti-WH product is predicted to be formed. This computational prediction was backed up by experiment on the system below. Link-functionalized polymers were conjugated to cis-benzocyclobutene in both syn and anti conformations. As predicted, both substrates gave the same (Z,Z) product, as determined by quenching by a stereospecific Diels-Alder reaction with the substituted maleimide. In particular, the syn-substituted substrate gave the anti-WH product, presumably because the mechanical stretching along the coordinate of the disrotatory pathway lowered its barrier enough to bias the reaction toward that mechanism. It has been stated that Elias James Corey, also a Nobel Prize winner, feels he is responsible for the ideas that laid the foundation for this research, and that Woodward unfairly neglected to credit him in the discovery. In a 2004 memoir published in the Journal of Organic Chemistry, [ 34 ] Corey makes his claim to priority of the idea: "On May 4, 1964, I suggested to my colleague R. B. Woodward a simple explanation involving the symmetry of the perturbed (HOMO) molecular orbitals for the stereoselective cyclobutene to 1,3-butadiene and 1,3,5-hexatriene to cyclohexadiene conversions that provided the basis for the further development of these ideas into what became known as the Woodward–Hoffmann rules". Corey, then 35, was working into the evening on Monday, May 4, as he and the other driven chemists often did. At about 8:30 p.m., he dropped by Woodward's office, and Woodward posed a question about how to predict the type of ring a chain of atoms would form. After some discussion, Corey proposed that the configuration of electrons governed the course of the reaction. Woodward insisted the solution would not work, but Corey left drawings in the office, sure that he was on to something. [ 35 ] "I felt that this was going to be a really interesting development and was looking forward to some sort of joint undertaking," he wrote. But the next day, Woodward flew into Corey's office as he and a colleague were leaving for lunch and presented Corey's idea as his own – and then left. Corey was stunned.
In a 2004 rebuttal published in Angewandte Chemie, [ 36 ] Roald Hoffmann denied the claim; he quotes Woodward from a lecture given in 1966, saying: "I REMEMBER very clearly—and it still surprises me somewhat—that the crucial flash of enlightenment came to me in algebraic, rather than in pictorial or geometric form. Out of the blue, it occurred to me that the coefficients of the terminal terms in the mathematical expression representing the highest occupied molecular orbital of butadiene were of opposite sign, while those of the corresponding expression for hexatriene possessed the same sign. From here it was but a short step to the geometric, and more obviously chemically relevant, view that in the internal cyclisation of a diene, the top face of one terminal atom should attack the bottom face of the other, while in the triene case, the formation of a new bond should involve the top (or pari passu, the bottom) faces of both terminal atoms." In addition, Hoffmann points out that in two publications from 1963 [ 37 ] and 1965, [ 38 ] Corey described a total synthesis of the compound dihydrocostunolide. Although they describe an electrocyclic reaction, Corey offers nothing with respect to explaining the stereospecificity of the synthesis. This photochemical reaction involving 6 = 4×1 + 2 electrons is now recognized as conrotatory.
https://en.wikipedia.org/wiki/Woodward–Hoffmann_rules
A woodworking machine is a machine that is intended to process wood. These machines are usually powered by electric motors and are used extensively in woodworking. Sometimes grinding machines (used for grinding down to smaller pieces) are also considered a part of woodworking machinery. [ 1 ] These machines are used both in small-scale commercial production of timber products and by hobbyists. Most of these machines may be used on solid timber and on composite products. Machines can be divided into the bigger stationary machines, where the machine remains stationary while the material is moved over the machine, and hand-held power tools, where the tool is moved over the material. These machines are used in large-scale manufacturing of cabinets and other wooden or panel products: panel dividing equipment, classified by number of beams, loading system, and saw carriage speed; double end tenoners, classified by conveyor type; panel edge processing equipment, classified by conveyor speed; and machines classified by number of boring heads.
https://en.wikipedia.org/wiki/Woodworking_machine
A woody plant is a plant that produces wood as its structural tissue and thus has a hard stem. [ 1 ] In cold climates, woody plants further survive winter or the dry season above ground, as opposed to herbaceous plants that die back to the ground until spring. [ 2 ] Woody plants are usually trees, shrubs, or lianas. These are usually perennial plants [ 3 ] whose stems and larger roots are reinforced with wood produced from secondary xylem. The main stem, larger branches, and roots of these plants are usually covered by a layer of bark. Wood is a structural tissue that allows woody plants to grow from above-ground stems year after year, thus making some woody plants the largest and tallest terrestrial plants. [ 3 ] Woody plants, like herbaceous perennials, typically have a dormant period of the year when growth does not take place. This occurs in temperate and continental climates due to freezing temperatures and lack of daylight during the winter months. [ 4 ] Meanwhile, dormancy in subtropical and tropical climates is due to the dry season, when low precipitation limits the water available for growth. [ 5 ] The dormant period will be accompanied by abscission (if the plant is deciduous). [ 6 ] Evergreen plants do not lose all their leaves at once (they instead shed them gradually over the growing season); however, growth virtually halts during the dormant season. Many woody plants native to the subtropics and tropics are evergreen due to year-round warm temperatures and rainfall. [ 7 ] However, in many regions with a tropical savanna climate or a monsoon subtropical climate, a lengthy dry season precludes evergreen vegetation, instead promoting the predominance of deciduous trees. [ 8 ] During the fall months, each stem in a deciduous plant cuts off the flow of nutrients and water to the leaves. This causes them to change colors as the chlorophyll in the leaves breaks down. [ 9 ] Special cells are formed that sever the connection between the leaf and stem, so that it will easily detach. Evergreen plants do not shed their leaves; they merely go into a state of low activity during the dormant season (in order to acclimate to cold temperatures or low rainfall). [ 10 ] During spring, the roots begin sending nutrients back up to the canopy. [ 11 ] When the growing season resumes, either with warm weather or the wet season, the plant will break bud by sending out new leaf or flower growth. This is accompanied by growth of new stems from buds on the previous season's wood. In colder climates, most stem growth occurs during spring and early summer. When the dormant season begins, the new growth hardens off and becomes woody. Once this happens, the stem will never grow in length again; however, it will keep expanding in diameter for the rest of the plant's life. Most woody plants native to colder climates have distinct growth rings produced by each year's production of new vascular tissue. Only the outer handful of rings contain living tissue (the cambium, xylem, phloem, and sapwood); inner layers have heartwood, dead tissue that serves merely as structural support. Stem growth primarily occurs out of the terminal bud on the tip of the stem. Axillary buds are suppressed by the terminal bud and produce less growth, unless the terminal bud is removed by human or natural action. If the terminal bud is removed during spring, the side buds will have nothing to suppress them and will begin rapidly sending out growth.
By late summer and early autumn, most active growth for the season has ceased, and pruning a stem will result in little or no new growth. Winter buds are formed when the dormant season begins. Depending on the plant, these buds contain either new leaf growth, new flowers, or both. Terminal buds have a stronger dominance on conifers than on broadleaf plants; thus conifers will normally grow a single straight trunk without forking or large side or lateral branches. As a woody plant grows, it will often lose lower leaves and branches as they become shaded out by the canopy. If a given stem is producing an insufficient amount of energy for the plant, the roots will "abort" it by cutting off the flow of water and nutrients, causing it to gradually die. Below ground, the root system expands each growing season in much the same manner as the stems. The roots grow in length and send out smaller lateral roots. At the end of the growing season, the newly grown roots become woody and cease future length expansion, but will continue to expand in diameter. However, unlike the above-ground portion of the plant, the root system continues to grow, although at a slower rate, throughout the dormant season. In cold-weather climates, root growth will continue as long as temperatures are above 2 °C (36 °F). Wood is primarily composed of xylem cells with cell walls made of cellulose and lignin. Xylem is a vascular tissue which moves water and nutrients from the roots to the leaves. Most woody plants form new layers of woody tissue each year, and so increase their stem diameter from year to year, with new wood deposited on the inner side of a vascular cambium layer located immediately beneath the bark. However, in some monocotyledons such as palms and dracaenas, the wood is formed in bundles scattered through the interior of the trunk. Stem diameter increases continuously throughout the growing season and halts during the dormant period. [ 12 ] The symbol for a woody plant, based on Species Plantarum by Linnaeus, is ♄, which is also the astronomical symbol for the planet Saturn. [ 13 ]
https://en.wikipedia.org/wiki/Woody_plant
The wool combing machine was invented by Edmund Cartwright, the inventor of the power loom, in Doncaster. The machine was used to arrange and lay parallel by length the fibers of wool, prior to further treatment. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Cartwright's invention, nicknamed "Big Ben," was originally patented in April 1790, with subsequent patents following in December 1790 and May 1792 as the machine's design was refined by Cartwright. [ 1 ] [ 2 ] [ 5 ] [ 4 ] This machine is the first example of mechanization of the wool combing stage of the textile manufacturing process, and a significant achievement for the textile industry. [ 2 ] [ 5 ] Cartwright's machine was described as doing the work of 20 hand-combers. [ 6 ] The wool combing machine was improved and refined by many later inventors, including Josué Heilmann, Samuel Cunliffe Lister, Isaac Holden, and James Noble. [ 2 ] [ 4 ] [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Wool_combing_machine
Woollins' reagent is an organic compound containing phosphorus and selenium. Analogous to Lawesson's reagent, it is used mainly as a selenation reagent. It is named after John Derek Woollins. Woollins' reagent is commercially available. It can also be conveniently prepared in the laboratory by heating a mixture of dichlorophenylphosphine and sodium selenide (Na2Se), itself prepared by reacting elemental selenium with sodium in liquid ammonia. [ 2 ] An alternative synthesis is the reaction of the pentamer (PPh)5 (pentaphenylcyclopentaphosphine) with elemental selenium. [ 3 ] The main use of Woollins' reagent is the selenation of carbonyl compounds. [ 4 ] For instance, Woollins' reagent will convert a carbonyl into a selenocarbonyl. Additionally, Woollins' reagent has been used to selenate carboxylic acids, alkenes, alkynes, and nitriles. [ 5 ]
https://en.wikipedia.org/wiki/Woollins'_reagent
Wootz steel is a crucible steel characterized by a pattern of bands and a high carbon content. These bands are formed by sheets of microscopic carbides within a tempered martensite or pearlite matrix in higher-carbon steel, or by ferrite and pearlite banding in lower-carbon steels. It was a pioneering steel alloy developed in southern India in the mid-1st millennium BC and exported globally. [ 1 ] Wootz steel originated in India in the mid-1st millennium BC; it was made in Golconda in Telangana, as well as in Karnataka, Tamil Nadu, and Sri Lanka. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The steel was exported as cakes of steely iron that came to be known as "wootz". [ 6 ] The method was to heat black magnetite ore in the presence of carbon in a sealed clay crucible inside a charcoal furnace to completely remove slag. An alternative was to smelt the ore first to give wrought iron, then heat and hammer it to remove slag. The carbon source was bamboo and leaves from plants such as Avārai. [ 6 ] [ 7 ] Locals in Sri Lanka adopted the production methods of creating wootz steel from the Cheras by the 5th century BC. [ 8 ] In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds. Production sites from antiquity have emerged in places such as Anuradhapura, Tissamaharama and Samanalawewa, as well as imported artifacts of ancient iron and steel from Kodumanal. Recent archaeological excavations (2018) of the Yodhawewa site (in Mannar District) discovered the lower half of a spherical furnace, crucible fragments, and lid fragments related to crucible steel production through the carburization process. [ 9 ] The southeast of Sri Lanka has yielded some of the island's oldest iron and steel artifacts and production processes, dating to the classical period. [ 10 ] [ 11 ] [ 12 ] [ 13 ] Trade between India and Sri Lanka through the Arabian Sea introduced wootz steel to Arabia. The term muhannad مهند or hendeyy هندي in pre-Islamic and early Islamic Arabic refers to sword blades made from Indian steel, which were highly prized and are attested in Arabic poetry. Further trade spread the technology to the city of Damascus, where an industry developed for making weapons of this steel. This led to the development of Damascus steel. The 12th-century Arab traveler Edrisi mentioned the "Hinduwani" or Indian steel as the best in the world. [ 14 ] Arab accounts also point to the fame of 'Teling' steel, which can be taken to refer to the region of Telangana, the Golconda region of Telangana clearly being the nodal center for the export of wootz steel to West Asia. [ 14 ] Another sign of its reputation is seen in a Persian phrase – to give an "Indian answer", meaning "a cut with an Indian sword". [ 8 ] Wootz steel was widely exported and traded throughout ancient Europe and the Arab world, and became particularly famous in the Middle East. [ 8 ] From the 17th century onwards, several European travelers observed steel manufacturing in South India, at Mysore, Malabar and Golconda. The word wootz was derived from the Tamil word urukku, meaning "melt, dissolve". Other Dravidian languages have similar-sounding words for steel: ukku in Kannada [ 15 ] [ 16 ] and Telugu, and urukku in Malayalam. When Benjamin Heyne inspected the Indian steel in the Ceded Districts and other Kannada-speaking areas, he was informed that the steel was ucha kabbina ("superior iron"), also known as ukku tundu in Mysore.
[ 17 ] [ 18 ] Legends of wootz steel and Damascus swords aroused the curiosity of the European scientific community from the 17th to the 19th century. The use of high-carbon alloys was previously little known in Europe, [ 19 ] and thus research into wootz steel played an important role in the development of modern English, French and Russian metallurgy . [ 20 ] In 1790, Sir Joseph Banks , president of the British Royal Society , received samples of wootz steel sent by Helenus Scott . These samples were subjected to scientific examination and analysis by several experts. [ 21 ] [ 22 ] [ 23 ] Specimens of daggers and other weapons were sent by the Rajas of India to the Great Exhibition in London in 1851 and to the 1862 International Exhibition . Though the arms of the swords were beautifully decorated and jeweled, they were most highly prized for the quality of their steel. The swords of the Sikhs were said to bear bending and crumpling, and yet be fine and sharp. [ 8 ] Wootz is characterized by a pattern caused by bands of clustered Fe 3 C particles, formed through the segregation of low levels of carbide-forming elements. [ 24 ] Wootz contains more carbonaceous matter than common qualities of cast steel. [ citation needed ] The distinct patterns of wootz steel that can be made through forging are wave, ladder, and rose patterns with finely spaced bands; with hammering, dyeing, and etching, further customized patterns were made. [ 25 ] The presence of cementite nanowires and carbon nanotubes in the microstructure of wootz steel has been identified by Peter Paufler of TU Dresden. [ 26 ] There is a possibility of an abundance of ultrahard metallic carbides in the steel matrix precipitating out in bands. Wootz swords were renowned for their sharpness and toughness . T. H. Henry analyzed and recorded the composition of wootz steel samples provided by the Royal School of Mines . Wootz steel was also analyzed by Michael Faraday and recorded to contain 0.01-0.07% aluminium . Faraday and Stodart hypothesized that aluminium was needed in the steel and was important in forming its excellent properties. However, T. H. Henry deduced that the presence of aluminium in the wootz used in these studies was due to slag , forming as silicates. Percy later reiterated that the quality of wootz steel does not depend on the presence of aluminium. [ 27 ] Wootz steel has been reproduced and studied in depth by the Royal School of Mines. [ 28 ] Dr. Pearson was the first to chemically examine wootz, in 1795, and he published his contributions in the Philosophical Transactions of the Royal Society. [ 29 ] Russian metallurgist Pavel Petrovich Anosov (see Bulat steel ) was almost able to reproduce ancient wootz steel with nearly all of its properties, and the steel he created was very similar to traditional wootz. [ citation needed ] He documented four different methods of producing wootz steel that exhibited traditional patterns. [ citation needed ] He died before he could fully document and publish his research. Oleg Sherby and Jeff Wadsworth, as well as researchers at Lawrence Livermore National Laboratory , have attempted to create steels with characteristics similar to wootz, but without success. [ citation needed ] J. D. Verhoeven and Alfred Pendray reconstructed methods of production, demonstrated the role of ore impurities in pattern creation, and reproduced wootz steel with patterns microscopically and visually identical to one of the ancient blade patterns.
[ citation needed ] Reibold et al.'s analyses spoke of the presence of carbon nanotubes enclosing nanowires of cementite, with trace elements/impurities of vanadium , molybdenum , chromium , etc. contributing to their creation over cycles of heating, cooling and forging. This resulted in a hard, high-carbon steel that remained malleable. [ 30 ] There are smiths who now consistently produce wootz steel blades visually identical to the old patterns. [ 31 ] Steel manufactured in Kutch (in present-day India) particularly enjoyed a widespread reputation, similar to that of steel manufactured at Glasgow and Sheffield . [ 8 ] Wootz was made over a period of nearly 2,000 years (the oldest sword samples date to around 200 CE), [ citation needed ] and the methods of producing the ingots, the ingredients, and the methods of forging varied from one area to the next. Some wootz blades displayed a pattern, while some did not. [ citation needed ] Heat treating was quite different from forging, and many different patterns were created by the various smiths, who spanned from China to Scandinavia. [ citation needed ] With fellow experts, the Georgian-Dutch master armourer Gocha Laghidze developed a new method to reintroduce "Georgian Damascus steel". In 2010, he and his colleagues gave a masterclass on this at the Royal Academy of Fine Arts in Antwerp. [ 32 ] [ 33 ]
https://en.wikipedia.org/wiki/Wootz_steel
WordMARC Composer [ 1 ] [ 2 ] was a scientifically oriented word processor [ 3 ] developed by MARC Software, an offshoot of MARC Analysis Research Corporation [ 4 ] (which specialized in high-end finite element analysis software for mechanical engineering). It ran originally on minicomputers such as the Prime and the Digital Equipment Corporation VAX . When the IBM PC emerged as the platform of choice for word processing, WordMARC allowed users to easily move documents from a minicomputer (where they could be easily shared) to PCs. WordMARC was the creation of Pedro Marcal, [ 5 ] who pioneered work in finite element analysis and needed a technical word processor that both supported complex notations [ 6 ] and was capable of running on minicomputers and other high-end machines such as those from Alliant and AT&T. [ 7 ] WordMARC was originally known as MUSE (MARC Universal Screen Editor), [ 8 ] but the name was changed because of a trademark conflict with another company when the product was ported to the IBM PC . In comparison with WordPerfect , WordMARC's formatting metadata was always hidden. This was considered friendlier to novice users, and less likely to result in mangled documents. Although it was billed as a WYSIWYG system, it did not provide for display of proportional fonts. It did, however, allow the use of proportional fonts by adjusting the margins based on the current text size: version 1 used an estimated average character width, while Primeword v2 had font character-width tables, and users were given a utility that could generate them from HP font files. Advanced features (for its time) included Document Assembly (maintaining each chapter of a book in a separate file and combining them for printing or to produce a table of contents or index), automatic paragraph numbering, footnotes, endnotes, support for mixed fonts, multi-level equations and scientific characters. [ 9 ] An early version offered support for Japanese characters. The Unix version of WordMARC supported PostScript . [ 10 ] In May 1999 the company was purchased by MacNeal-Schwendler Corp., [ 12 ] [ 4 ] which became MSC Software that year. [ 11 ]
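The version-1 margin trick described above (estimating line capacity from an average character width rather than per-glyph metrics) can be sketched as follows. This is a hypothetical illustration; the function name and constants are mine, not WordMARC internals.

```python
# Hypothetical sketch of average-character-width margin estimation: given a
# page width and margins in inches, estimate how many characters fit on a
# line using an assumed average glyph width. Values are illustrative.

def chars_per_line(page_width_in: float, margins_in: float,
                   avg_char_width_in: float) -> int:
    """Estimate printable characters per line from an average glyph width."""
    printable = page_width_in - 2 * margins_in
    return round(printable / avg_char_width_in)

# 8.5" page, 1" margins, ~0.1" average width (roughly 10-pitch type):
print(chars_per_line(8.5, 1.0, 0.1))  # -> 65
```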
https://en.wikipedia.org/wiki/WordMARC
In computer hardware, a word mark or flag is a bit in each memory location on some early variable word length computers (e.g., IBM 1401 , 1410 , 1620 ) used to mark the end of a word . [ 1 ] Sometimes the actual bit used as a word mark on a given machine is not called word mark , but has a different name (e.g., flag on the IBM 1620, because on this machine it is multipurpose). [ 2 ] The term word mark should not be confused with group mark or with record mark , which are distinct characters.
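A minimal model of word-mark-delimited memory, as described above, is sketched below. The cell layout is hypothetical, and which end of a field carried the mark varied by machine; here the flag bit ends a word, following the description in the text.

```python
# Illustrative model of memory on a variable word length machine: each cell
# holds a digit plus a flag bit that, when set, marks the end of a word.
# The layout and values are made up for the example.

memory = [
    (7, False), (4, False), (2, True),   # word 1: digits 7, 4, 2
    (9, True),                           # word 2: single digit 9
    (1, False), (0, False), (5, True),   # word 3: digits 1, 0, 5
]

def read_words(cells):
    """Scan cells left to right, emitting a word at each set flag bit."""
    word = []
    for digit, word_mark in cells:
        word.append(digit)
        if word_mark:            # flag set: this cell ends the current word
            yield word
            word = []

print(list(read_words(memory)))  # -> [[7, 4, 2], [9], [1, 0, 5]]
```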
https://en.wikipedia.org/wiki/Word_mark_(computer_hardware)
A word processor ( WP ) [ 1 ] [ 2 ] is a device or computer program that provides for input, editing, formatting, and output of text, often with some additional features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general-purpose computers, including smartphones, tablets, laptops and desktop computers. The functions of a word processor program typically fall between those of a simple text editor and a desktop publishing program; many word processing programs have gained advanced features over time, providing functionality similar to desktop publishing programs. [ 3 ] [ 4 ] [ 5 ] Common word processor programs include LibreOffice Writer , Google Docs and Microsoft Word . Word processors developed from mechanical machines, later merging with computer technology. [ 6 ] The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and then of the refinement of the technology to make it available to corporations and individuals. The term word processing appeared in American offices in the early 1970s, centered on the idea of streamlining the work of typists, but the meaning soon shifted toward the automation of the whole editing cycle. At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. [ 7 ] Through history, there have been three types of word processors: mechanical, electronic and software. The first word processing device (a "Machine for Transcribing Letters" that appears to have been similar to a typewriter) was patented in 1714 by Henry Mill; it was described as capable of "writing so clearly and accurately you could not distinguish it from a printing press". [ 8 ] More than a century later, another patent appeared in the name of William Austin Burt for the typographer . In the late 19th century, Christopher Latham Sholes [ 9 ] created the first recognizable typewriter, which was described as a "literary piano". [ 10 ] The only "word processing" these mechanical systems could perform was to change where letters appeared on the page, to fill in spaces that were previously left on the page, or to skip over lines. It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part. The term "word processing" (translated from the German word Textverarbeitung ) was possibly created in the 1950s by Ulrich Steinhilper , a German IBM typewriter sales executive, or by George M. Ryan, an American electro-mechanical typewriter executive who obtained a trademark registration in the USPTO for the phrase. [ 11 ] However, it did not make its appearance in 1960s office management or computing literature (an example of grey literature ), though many of the ideas, products, and technologies to which it would later be applied were already well known. Nonetheless, by 1971, the term was recognized by the New York Times [ 12 ] as a business " buzz word ". Word processing paralleled the more general "data processing", or the application of computers to business administration.
Thus, by 1972, the discussion of word processing was common in publications devoted to business office management and technology; by the mid-1970s, the term would have been familiar to any office manager who consulted business periodicals. By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter). It was based on the 1961 IBM Selectric typewriter , but came built into its own desk, integrated with magnetic tape recording and playback facilities along with controls and a bank of electrical relays. The MT/ST automated word wrap, but it had no screen. This device allowed a user to rewrite text that had been written on another tape, and it also allowed limited collaboration in the sense that a user could send the tape to another person to let them edit the document or make a copy. It was a revolution for the word processing industry. In 1969, the tapes were replaced by magnetic cards. These memory cards were inserted into an extra device that accompanied the MT/ST and could read and record the user's work. Throughout the 1960s and 70s, word processing began to slowly shift from glorified typewriters augmented with electronic features to become fully computer-based (although only with single-purpose hardware) with the development of several innovations. Just before the arrival of the personal computer (PC), IBM developed the floppy disk . In the 1970s, the first proper word-processing systems appeared, which allowed display and editing of documents on CRT screens . During this era, these early stand-alone word processing systems were designed, built, and marketed by several pioneering companies. Linolex Systems was founded in 1970 by James Lincoln and Robert Oleksiak. Linolex based its technology on microprocessors, floppy drives and software. It was a computer-based system designed for word processing applications, and the company sold systems through its own sales force. With a base of installed systems in over 500 sites, Linolex Systems sold 3 million units in 1975 — a year before the Apple computer was released. [ 13 ] At that time, the Lexitron Corporation also produced a series of dedicated word-processing microcomputers. Lexitron was the first to use a full-sized video display screen (CRT) in its models, by 1978. Lexitron also used 5 1⁄4 inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive, and the system booted up . The data diskette was then put in the second drive. The operating system and the word processing program were combined in one file. [ 14 ] Another of the early word processing adopters was Vydec, which created in 1973 [ 15 ] the first modern text processor, the "Vydec Word Processing System". It had multiple built-in functions, like the ability to share content by diskette and print it. [ further explanation needed ] The Vydec Word Processing System sold for $12,000 at the time (about $60,000 adjusted for inflation). [ 16 ] The Redactron Corporation (organized by Evelyn Berezin in 1969) designed and manufactured editing systems, including correcting/editing typewriters, cassette and card units, and eventually a word processor called the Data Secretary. The Burroughs Corporation acquired Redactron in 1976. [ 17 ] A CRT-based system by Wang Laboratories became one of the most popular systems of the 1970s and early 1980s.
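The word wrap that the MT/ST automated remains a basic word-processor function. A minimal greedy sketch is below; this is a modern software illustration of the idea, not the MT/ST's tape-based mechanism.

```python
def word_wrap(text: str, width: int) -> list[str]:
    """Greedy word wrap: pack words onto a line until the next would overflow.
    (Words longer than the line width are emitted on their own, overlong.)"""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(word_wrap("the quick brown fox jumps over the lazy dog", 15))
# -> ['the quick brown', 'fox jumps over', 'the lazy dog']
```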
The Wang system displayed text on a CRT screen, and incorporated virtually every fundamental characteristic of word processors as they are known today. While early computerized word processing systems were often expensive and hard to use (like the computer mainframes of the 1960s), the Wang system was a true office machine, affordable to organizations such as medium-sized law firms, and easily mastered and operated by secretarial staff. The phrase "word processor" rapidly came to refer to CRT-based machines similar to Wang's. Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (re-badged AES Data machines), CPT, and NBI. All were specialized, dedicated, proprietary systems, with prices in the $10,000 range. Cheap general-purpose personal computers were still the domain of hobbyists. In Japan, even though typewriters for the Japanese writing system had been widely used by businesses and governments, they were limited to specialists and required special skills due to the wide variety of characters, until computer-based devices came onto the market. In 1977, Sharp showcased a prototype of a dedicated computer-based word processing device for the Japanese writing system at the Business Show in Tokyo. [ 18 ] [ 19 ] Toshiba released the first Japanese word processor, the JW-10 [ jp ] , in February 1979. [ 20 ] The price was 6,300,000 JPY, equivalent to US$45,000. It has been selected as one of the milestones of the IEEE . [ 21 ] The Japanese writing system uses a large number of kanji (logographic Chinese characters), which require 2 bytes to store, so having one key per symbol is infeasible. Japanese word processing became possible with the development of the Japanese input method (a sequence of keypresses, with visual feedback, that selects a character), now widely used in personal computers. Oki launched the OKI WORD EDITOR-200 in March 1979 with this kana-based keyboard input system. In 1980 several electronics and office-equipment brands entered this rapidly growing market with more compact and affordable devices. For instance, NEC introduced the NWP-20 [ jp ] , and Fujitsu launched the Fujitsu OASYS [ jp ] . While the average unit price in 1980 was 2,000,000 JPY (US$14,300), it had dropped to 164,000 JPY (US$1,200) by 1985. [ 22 ] Even after personal computers became widely available, Japanese word processors remained popular as they tended to be more portable (an "office computer" was initially too large to carry around), and they became commonplace in business and academia, and even among private individuals, in the second half of the 1980s. [ 23 ] The phrase "word processor" has been abbreviated as "Wa-pro" or "wapuro" in Japanese. The final step in word processing came with the advent of the personal computer in the late 1970s and 1980s and with the subsequent creation of word processing software. Word processing software that would create much more complex and capable output was developed, and prices began to fall, making it more accessible to the public. By the late 1970s, computerized word processors were still primarily used by employees composing documents for large and midsized businesses (e.g., law firms and newspapers). Within a few years, the falling prices of PCs made word processing available for the first time to all writers in the convenience of their homes.
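The kana-based input described above, in which a sequence of keypresses selects among kanji candidates, can be sketched as a dictionary lookup. The tiny candidate table below is an illustrative fragment, not a real input method's data; real systems use large dictionaries and context-based ranking.

```python
# Toy sketch of kana-to-kanji conversion: type a reading, pick a candidate.
# The table is a made-up fragment for illustration only.

CANDIDATES = {
    "kawa": ["川", "河", "皮"],   # homophones sharing the reading "kawa"
    "ki":   ["木", "気", "機"],
}

def convert(reading: str, choice: int = 0) -> str:
    """Return the chosen kanji candidate for a romanized reading."""
    options = CANDIDATES.get(reading, [reading])  # fall back to the input
    return options[choice]

print(convert("kawa"))      # -> 川 (default: first candidate)
print(convert("kawa", 1))   # -> 河 (user picked the second candidate)
```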
The first word processing program for personal computers ( microcomputers ) was Electric Pencil , from Michael Shrayer Software , which went on sale in December 1976. In 1978, WordStar appeared and, because of its many new features, soon dominated the market. WordStar was written for the early CP/M (Control Program–Micro) operating system, ported to CP/M-86 , then to MS-DOS , and was the most popular word processing program until 1985, when WordPerfect sales first exceeded WordStar sales. Early word processing software was not as intuitive as word processor devices. Most early word processing software required users to memorize semi-mnemonic key combinations rather than pressing keys such as "copy" or "bold". Moreover, CP/M lacked cursor keys; for example, WordStar used the E-S-D-X-centered "diamond" for cursor navigation. A notable exception was the software Lexitype for MS-DOS, which took inspiration from the Lexitron dedicated word processor's user interface and mapped individual functions to particular keyboard function keys ; a set of stick-on "keycaps" describing the functions was provided with the software. Lexitype was popular with large organizations that had previously used the Lexitron. [ 24 ] Eventually, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as " killer app " spreadsheet applications, e.g. VisiCalc and Lotus 1-2-3 , were so compelling that personal computers and word processing software became serious competition for the dedicated machines and soon dominated the market. The late 1980s brought innovations such as the advent of laser printers , a "typographic" approach to word processing ( WYSIWYG - What You See Is What You Get) using bitmap displays with multiple fonts (pioneered by the Xerox Alto computer and Bravo word processing program), and graphical user interface features such as "copy and paste" (another Xerox PARC innovation, with the Gypsy word processor). These were popularized by MacWrite on the Apple Macintosh in 1984, and Microsoft Word on the IBM PC in 1983. These were probably the first true WYSIWYG word processors to become known to many people. Of particular interest also is the standardization of TrueType fonts used in both Macintosh and Windows PCs. While the publishers of the operating systems provide TrueType typefaces, they are largely gathered from traditional typefaces converted by smaller font publishing houses to replicate standard fonts. Demand developed for new and interesting fonts, which can be found free of copyright restrictions or commissioned from font designers. The growing popularity of the Windows operating system in the 1990s later took Microsoft Word along with it. Originally called "Microsoft Multi-Tool Word", this program quickly became a synonym for "word processor". Early in the 21st century, Google Docs popularized the transition to online or offline web-browser-based word processing. This was enabled by the widespread adoption of suitable internet connectivity in businesses and domestic households, and later by the popularity of smartphones . Google Docs enabled word processing from within any vendor's web browser, which could run on any vendor's operating system on any physical device type including tablets and smartphones, although offline editing is limited to a few Chromium-based web browsers.
Google Docs also drove significant growth in practices such as remote access to files and collaborative real-time editing , both of which became simple to do with little or no need for costly software and specialist IT support.
https://en.wikipedia.org/wiki/Word_processing
A word processor is an electronic device (later a computer software application ) for composing, editing, formatting, and printing text. The word processor was a stand-alone office machine developed in the 1960s, combining the keyboard text-entry and printing functions of an electric typewriter with a recording unit, either tape or floppy disk (as used by the Wang machine), and a simple dedicated computer processor for the editing of text. [ 1 ] Although features and designs varied among manufacturers and models, and new features were added as technology advanced, the first word processors typically featured a monochrome display and the ability to save documents on memory cards or diskettes . Later models introduced innovations such as spell-checking programs and improved formatting options. As the more versatile combination of personal computers and printers became commonplace, and computer software applications for word processing became popular, most business machine companies stopped manufacturing dedicated word processor machines. In 2009 there were only two U.S. companies, Classic and AlphaSmart , which still made them. [ 2 ] [ needs update ] Many older machines, however, remain in use. Since 2009, Sentinel has offered a machine described as a "word processor", but it is more accurately a highly specialised microcomputer used for accounting and publishing. [ 3 ] In 2014, U.S. company Astrohaus launched the Freewrite series of electronic word processors. [ 4 ] Word processing was one of the earliest applications for the personal computer in office productivity, and was the most widely used application on personal computers until the World Wide Web rose to prominence in the mid-1990s. Although the early word processors evolved to use tag-based markup for document formatting, most modern word processors take advantage of a graphical user interface providing some form of what-you-see-is-what-you-get ("WYSIWYG") editing. Most are powerful systems consisting of one or more programs that can produce a combination of images , graphics and text, the latter handled with type-setting capability. Typical features of a modern word processor include multiple font sets, spell checking, grammar checking, a built-in thesaurus, automatic text correction , web integration, HTML conversion, pre-formatted publication projects such as newsletters and to-do lists, and much more. Microsoft Word is the most widely used word processing software according to a user tracking system built into the software. [ 5 ] Microsoft estimates that roughly half a billion people use the Microsoft Office suite, [ 6 ] which includes Word. Many other word processing applications exist, including WordPerfect (which dominated the market from the mid-1980s to the early 1990s on computers running Microsoft's MS-DOS operating system , and still (2014) is favored for legal applications), Apple's Pages application, and open source applications such as OpenOffice.org Writer , LibreOffice Writer , AbiWord , KWord , and LyX . Web-based word processors such as Office Online or Google Docs are a relatively new category. Word processors evolved dramatically once they became software programs rather than dedicated machines. They can usefully be distinguished from text editors , the category of software they evolved from. [ 7 ] [ 8 ] A text editor is a program that is used for typing, copying, pasting, and printing text (a single character, or strings of characters). Text editors do not format lines or pages.
(There are extensions of text editors which can perform formatting of lines and pages: batch document processing systems, starting with TJ-2 and RUNOFF and still available in such systems as LaTeX and Ghostscript , as well as programs that implement the paged-media extensions to HTML and CSS .) Text editors are now used mainly by programmers , website designers, computer system administrators, and, in the case of LaTeX , by mathematicians and scientists (for complex formulas and for citations in rare languages). They are also useful when fast startup times, small file sizes, editing speed, and simplicity of operation are valued, and when formatting is unimportant. Due to their use in managing complex software projects, text editors can sometimes provide better facilities for managing large writing projects than a word processor. [ 9 ] Word processing added to the text editor the ability to control type style and size, to manage lines (word wrap), to format documents into pages, and to number pages. Functions now taken for granted were added incrementally, sometimes by purchase from independent providers of add-on programs. Spell checking, grammar checking and mail merge were some of the most popular add-ons for early word processors. Word processors are also capable of hyphenation, and the management and correct positioning of footnotes and endnotes. Recent word processors have added many more advanced features. Later desktop publishing programs were specifically designed with elaborate pre-formatted layouts for publication, offering only limited options for changing the layout, while allowing users to import text that was written using a text editor or word processor, or type the text in themselves. Word processors are descended from the Friden Flexowriter , which had two punched tape stations and permitted switching from one to the other (thus enabling what was called the "chain" or "form letter", one tape containing names and addresses, and the other the body of the letter to be sent). It did not wrap words; that capability first appeared in IBM's Magnetic Tape Selectric Typewriter (later, the Magnetic Card Selectric Typewriter). Expensive Typewriter , written and improved between 1961 and 1962 by Steve Piner and L. Peter Deutsch , was a text editing program that ran on a DEC PDP-1 computer at MIT . Since it could drive an IBM Selectric typewriter (a letter-quality printer), it may be considered the first word processing program, but the term word processing itself was only introduced, by IBM 's Böblingen Laboratory, in the late 1960s. [ citation needed ] In 1969, two software-based text editing products (Astrotype and Astrocomp) were developed and marketed by Information Control Systems ( Ann Arbor , Michigan). [ 10 ] [ 11 ] [ 12 ] Both products used the Digital Equipment Corporation PDP-8 minicomputer, DECtape (4” reel) randomly accessible tape drives, and a modified version of the IBM Selectric typewriter (the IBM 2741 Terminal). These 1969 products preceded CRT display-based word processors. Text editing was done using a line numbering system viewed on a paper copy inserted in the Selectric typewriter. Evelyn Berezin invented a Selectric-based word processor in 1969, and founded the Redactron Corporation to market the $8,000 machine. [ 13 ] By 1971 word processing was recognized by the New York Times as a " buzz word ".
[ 15 ] A 1974 Times article referred to "the brave new world of Word Processing or W/P. That's International Business Machines talk ... I.B.M. introduced W/P about five years ago for its Magnetic Tape Selectric Typewriter and other electronic razzle-dazzle." [ 16 ] IBM defined the term in a broad and vague way as "the combination of people, procedures, and equipment which transforms ideas into printed communications," and originally used it to include dictating machines and ordinary, manually operated Selectric typewriters. [ 17 ] By the early seventies, however, the term was generally understood to mean semiautomated typewriters affording at least some form of editing and correction, and the ability to produce perfect "originals". Thus, the Times headlined a 1974 Xerox product as a "speedier electronic typewriter", but went on to describe the product, which had no screen, [ 18 ] as "a word processor rather than strictly a typewriter, in that it stores copy on magnetic tape or magnetic cards for retyping, corrections, and subsequent printout". [ 19 ] In the late 1960s IBM provided a program called FORMAT for generating printed documents on any computer capable of running Fortran IV. Written by Gerald M. Berns, FORMAT was described in his paper "Description of FORMAT, a Text-Processing Program" (Communications of the ACM, Volume 12, Number 3, March 1969) as "a production program which facilitates the editing and printing of 'finished' documents directly on the printer of a relatively small (64k) computer system. It features good performance, totally free-form input, very flexible formatting capabilities including up to eight columns per page, automatic capitalization, aids for index construction, and a minimum of nontext [control elements] items." Input was normally on punched cards or magnetic tape, with up to 80 capital letters and non-alphabetic characters per card. The limited typographical controls available were implemented by control sequences; for example, letters were automatically converted to lower case unless they followed a full stop, that is, the "period" character. Output could be printed on a typical line printer in all-capitals — or in upper and lower case using a special ("TN") printer chain — or could be punched as a paper tape which could be printed, in better than line printer quality, on a Flexowriter. A workalike program with some improvements, DORMAT, was developed and used at University College London . [ 20 ] Electromechanical paper-tape-based equipment such as the Friden Flexowriter had long been available; the Flexowriter allowed for operations such as repetitive typing of form letters (with a pause for the operator to manually type in the variable information), [ 21 ] and when equipped with an auxiliary reader, could perform an early version of " mail merge ". Circa 1970 it began to be feasible to apply electronic computers to office automation tasks. IBM's Mag Tape Selectric Typewriter ( MT/ST ) and later Mag Card Selectric (MCST) were early devices of this kind, which allowed editing, simple revision, and repetitive typing, with a one-line display for editing single lines. [ 22 ] The first novel to be written on a word processor, the IBM MT/ST, was Len Deighton 's Bomber , published in 1970. [ 23 ] The New York Times , reporting on a 1971 business equipment trade show, noted the prominence of word processing equipment. In 1971, a third of all working women in the United States were secretaries, and they could see that word processing would affect their careers.
Some manufacturers, according to a Times article, urged that "the concept of 'word processing' could be the answer to Women's Lib advocates' prayers. Word processing will replace the 'traditional' secretary and give women new administrative roles in business and industry." [ 15 ] The 1970s word processing concept did not refer merely to equipment, but, explicitly, to the use of equipment for "breaking down secretarial labor into distinct components, with some staff members handling typing exclusively while others supply administrative support. A typical operation would leave most executives without private secretaries. Instead one secretary would perform various administrative tasks for three or more executives." [ 24 ] A 1971 article said that "Some [secretaries] see W/P as a career ladder into management; others see it as a dead-end into the automated ghetto; others predict it will lead straight to the picket line." The National Secretaries Association, which defined secretaries as people who "can assume responsibility without direct supervision", feared that W/P would transform secretaries into "space-age typing pools". The article considered only the organizational changes resulting from secretaries operating word processors rather than typewriters; the possibility that word processors might result in managers creating documents without the intervention of secretaries was not considered—not surprising in an era when few managers, but most secretaries, possessed keyboarding skills. [ 16 ] In 1972, Stephen Bernard Dorsey , Founder and President of Canadian company Automatic Electronic Systems (AES), introduced the world's first programmable word processor with a video screen. The real breakthrough by Dorsey's AES team was that their machine stored the operator's texts on magnetic disks. Texts could be retrieved from the disks simply by entering their names at the keyboard. More importantly, a text could be edited, for instance a paragraph moved to a new place, or a spelling error corrected, and these changes were recorded on the magnetic disk. The AES machine was actually a sophisticated computer that could be reprogrammed by changing the instructions contained within a few chips. [ 25 ] [ 26 ] In 1975, Dorsey started Micom Data Systems and introduced the Micom 2000 word processor. The Micom 2000 improved on the AES design by using the Intel 8080 single-chip microprocessor, which made the word processor smaller and less costly to build, and supported multiple languages. [ 27 ] Around this time, DeltaData and Wang word processors also appeared, again with a video screen and a magnetic storage disk. The competitive edge for Dorsey's Micom 2000 was that, unlike many other machines, it was truly programmable. The Micom machine countered the problem of obsolescence by avoiding the limitations of a hard-wired system of program storage. The Micom 2000 utilized RAM, which was mass-produced and totally programmable. [ 28 ] The Micom 2000 was said to be a year ahead of its time when it was introduced into a marketplace with serious competition from the likes of IBM, Xerox and Wang Laboratories . [ 29 ] In 1978, Micom partnered with Dutch multinational Philips , and Dorsey grew Micom's sales position to number three among major word processor manufacturers, behind only IBM and Wang. [ 30 ] The Wang was not the first CRT-based machine, nor were all of its innovations unique to Wang.
In the early 1970s Linolex, Lexitron and Vydec introduced pioneering word-processing systems with CRT display editing. A Canadian electronics company, Automatic Electronic Systems, had introduced a product in 1972, but went into receivership a year later. In 1976, refinanced by the Canada Development Corporation , it returned to operation as AES Data , and went on to successfully market its brand of word processors worldwide until its demise in the mid-1980s. Its first office product, the AES-90, [ 31 ] combined for the first time a CRT screen, a floppy disk and a microprocessor, [ 25 ] [ 26 ] that is, the very same winning combination that would be used by IBM for its PC seven years later. [ citation needed ] The AES-90 software was able to handle French and English typing from the start, displaying and printing the texts side-by-side, a Canadian government requirement. The first eight units were delivered to the office of the then Prime Minister, Pierre Elliott Trudeau , in February 1974. [ citation needed ] Despite these predecessors, Wang's product was a standout, and by 1978 it had sold more of these systems than any other vendor. [ 32 ] The phrase "word processor" rapidly came to refer to CRT-based machines similar to the AES 90. Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (marketing re-badged AES Data machines), CPT, and NBI. [ 33 ] All were specialized, dedicated, proprietary systems, priced around $10,000. Cheap general-purpose computers were still for hobbyists. Some of the earliest CRT-based machines used cassette tapes for removable-memory storage until floppy diskettes became available for this purpose: first the 8-inch floppy, then the 5¼-inch (drives by Shugart Associates and diskettes by Dysan ). Printing of documents was initially accomplished using IBM Selectric typewriters modified for ASCII -character input. These were later replaced by application-specific daisy wheel printers , first developed by Diablo , which became a Xerox company, and later by Qume . For quicker "draft" printing, dot-matrix line printers were optional alternatives with some word processors. Electric Pencil , released in December 1976, was the first word processor software for microcomputers. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] Software-based word processors running on general-purpose personal computers gradually displaced dedicated word processors, and the term came to refer to software rather than hardware. Some programs were modeled after particular dedicated WP hardware. MultiMate , for example, was written for an insurance company that had hundreds of typists using Wang systems, and spread from there to other Wang customers. To adapt to the smaller, more generic PC keyboard, MultiMate used stick-on labels and a large plastic clip-on template to remind users of its dozens of Wang-like functions, using the shift, alt and ctrl keys with the 10 IBM function keys and many of the alphabet keys. Other early word-processing software required users to memorize semi-mnemonic key combinations rather than pressing keys labelled "copy" or "bold". (Many early PCs lacked cursor keys; WordStar famously used the E-S-D-X-centered "diamond" for cursor navigation, and modern vi-like editors encourage use of hjkl for navigation.)
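The WordStar-style "diamond" navigation mentioned above can be sketched as a simple key-to-offset mapping; the coordinate convention and values here are illustrative, not WordStar's internals.

```python
# Hypothetical illustration of diamond navigation: Ctrl+E/S/D/X move the
# cursor up/left/right/down, mirroring the keys' physical diamond layout.
# Positions are (column, row); moving up decreases the row number.

DIAMOND = {
    "E": (0, -1),   # up
    "S": (-1, 0),   # left
    "D": (1, 0),    # right
    "X": (0, 1),    # down
}

def move(pos, key):
    dx, dy = DIAMOND[key]
    return (pos[0] + dx, pos[1] + dy)

pos = (10, 5)
for key in "EEDX":       # up, up, right, down
    pos = move(pos, key)
print(pos)               # -> (11, 4)
```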
However, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as VisiCalc , were so compelling that personal computers and word processing software soon became serious competition for the dedicated machines. Word processing became the most popular use for personal computers, and unlike the spreadsheet (dominated by Lotus 1-2-3 ) and database ( dBase ) markets, WordPerfect , XyWrite , Microsoft Word , pfs:Write , and dozens of other word processing software brands competed in the 1980s; PC Magazine reviewed 57 different programs in one January 1986 issue. [ 35 ] Development of higher-resolution monitors allowed them to provide limited WYSIWYG (What You See Is What You Get), to the extent that typographical features like bold and italics, indentation, justification and margins were approximated on screen. The mid-to-late 1980s saw the spread of laser printers, of a "typographic" approach to word processing, and of true WYSIWYG bitmap displays with multiple fonts (pioneered by the Xerox Alto computer and Bravo word processing program), PostScript , and graphical user interfaces (another Xerox PARC innovation, with the Gypsy word processor, which was commercialised in the Xerox Star product range). Standalone word processors adapted by getting smaller and replacing their CRTs with small character-oriented LCD displays. Some models also had computer-like features such as floppy disk drives and the ability to output to an external printer. They also got a name change, now being called "electronic typewriters" and typically occupying a lower end of the market, selling for under US$200. During the late 1980s and into the 1990s the predominant word processing program was WordPerfect. [ 39 ] It had more than 50% of the worldwide market as late as 1995, but by 2000 Microsoft Word had up to 95% market share. [ 40 ] MacWrite , Microsoft Word, and other word processing programs for the bit-mapped Apple Macintosh screen, introduced in 1984, were probably the first true WYSIWYG word processors to become known to many people before the introduction of Microsoft Windows. Dedicated word processors eventually became museum pieces.
https://en.wikipedia.org/wiki/Word_processor_(electronic_device)
In linguistics , a word sense is one of the meanings of a word . For example, a dictionary may have over 50 different senses of the word " play ", each of these having a different meaning based on the context of the word's usage in a sentence , as follows: We went to see the play Romeo and Juliet at the theater. The coach devised a great play that put the visiting team on the defensive. The children went out to play in the park. In each sentence, different collocates of "play" signal its different meanings. People and computers , as they read words, must use a process called word-sense disambiguation [ 1 ] [ 2 ] to reconstruct the likely intended meaning of a word. This process uses context to narrow the possible senses down to the probable ones. The context includes such things as the ideas conveyed by adjacent words and nearby phrases, the known or probable purpose and register of the conversation or document, and the orientation (time and place) implied or expressed. The disambiguation is thus context-sensitive . Advanced semantic analysis has resulted in a sub-distinction. A word sense corresponds neatly either to a seme (the smallest possible unit of meaning ) or to a sememe (a larger unit of meaning), and polysemy of a word or phrase is the property of having multiple semes or sememes and thus multiple senses. Often the senses of a word are related to each other within a semantic field . A common pattern is that one sense is broader and another narrower. This is often the case in technical jargon , where the target audience uses a narrower sense of a word that a general audience would tend to take in its broader sense. For example, in casual use " orthography " will often be glossed for a lay audience as " spelling ", but in linguistic usage "orthography" (comprising spelling, casing , spacing , hyphenation , and other punctuation ) is a hypernym of "spelling". Besides jargon, however, the pattern is common even in general vocabulary. Examples are the variation in senses of the term "wood wool" and in those of the word "bean". This pattern entails that natural language can often lack explicitness about hyponymy and hypernymy . Much more than programming languages do, it relies on context instead of explicitness; meaning is implicit within a context. Common examples are as follows: Usage labels of " sensu " plus a qualifier , such as " sensu stricto " ("in the strict sense") or " sensu lato " ("in the broad sense"), are sometimes used to clarify what is meant by a text. Polysemy entails a common historic root to a word or phrase. Broad medical terms, usually followed by qualifiers such as those relating to certain conditions or types of anatomical locations, are polysemic, and older conceptual words are, with few exceptions, highly polysemic (and usually beyond shades of similar meaning into the realms of being ambiguous ). Homonymy is where two separate-root words ( lexemes ) happen to have the same spelling and pronunciation .
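The disambiguation process described above can be illustrated with a simplified Lesk-style overlap heuristic: pick the sense whose gloss shares the most words with the sentence's context. This is a toy sketch; the abbreviated glosses below are mine, not dictionary entries, and real systems use far richer context and resources.

```python
# Simplified Lesk-style word-sense disambiguation for "play": choose the
# sense whose gloss overlaps most with the context words of the sentence.
# Glosses are abbreviated, illustrative stand-ins for dictionary entries.

SENSES = {
    "drama":    "a theatrical performance staged at a theater",
    "maneuver": "a planned move by a team or coach in a game",
    "recreate": "to engage in activity for enjoyment as children do in a park",
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().split())
    def overlap(gloss: str) -> int:
        return len(context & set(gloss.split()))
    return max(SENSES, key=lambda sense: overlap(SENSES[sense]))

print(disambiguate("We went to see the play at the theater"))       # -> drama
print(disambiguate("The coach devised a great play for the team"))  # -> maneuver
```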
https://en.wikipedia.org/wiki/Word_sense
In chemistry , work-up refers to the series of manipulations required to isolate and purify the product(s) of a chemical reaction . [ 1 ] The term is used colloquially to refer to these manipulations, which may include steps such as quenching the reaction, liquid–liquid extraction , washing, drying, and removal of solvent. The work-up for a given chemical reaction may require one or more of these manipulations. Work-up steps are not always explicitly shown in reaction schemes. Written experimental procedures will describe work-up steps but will usually not formally refer to them as a work-up. The Grignard reaction between phenylmagnesium bromide ( 1 ) and carbon dioxide in the form of dry ice gives the conjugate base of benzoic acid ( 2 ); the desired product, benzoic acid ( 3 ), is then obtained by work-up. [ 2 ] A dehydration reaction produces the desired alkene ( 3 ) from an alcohol ( 1 ). The reaction is performed in a distillation apparatus so the formed alkene product can be distilled off and collected as the reaction proceeds. The water produced by the reaction, as well as some acid, will co-distill, giving a distillate mixture ( 2 ); the product is then isolated from the mixture by work-up. [ 3 ] The reaction between a secondary amine ( 1 ) and an acyl chloride ( 2 ) yields the desired amide ( 4 ). The acyl chloride is added slowly to a solution of the amine and triethylamine in dichloromethane at 0 °C. The reaction is allowed to warm to room temperature and is stirred for 14 hours. The crude reaction mixture ( 3 ) is then worked up to isolate the desired product. [ 4 ]
https://en.wikipedia.org/wiki/Work-up
In science, work is the energy transferred to or from an object via the application of force along a displacement . In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application . A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. [ 1 ] For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors . The work done is given by the dot product of the two vectors, where the result is a scalar . When the force F is constant and the angle θ between the force and the displacement s is also constant, then the work done is given by $W = Fs\cos\theta$. If the force is variable, then work is given by the line integral $W = \int \mathbf{F} \cdot d\mathbf{s}$, where $d\mathbf{s}$ is the infinitesimal change in the displacement vector. Work is a scalar quantity , [ 2 ] so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers , as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ( On Mechanics ), in which he showed the underlying mathematical similarity of the machines as force amplifiers. [ 3 ] [ 4 ] He was the first to explain that simple machines do not create energy, only transform it. [ 3 ] Although the term work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency , and even force . [ 5 ] In 1637, the French philosopher René Descartes wrote: [ 6 ] Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote: [ 7 ] The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis 's.
[ 8 ] The term work (or mechanical work ), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French professor of applied mechanics Jean-Victor Poncelet . [ 9 ] [ 10 ] [ 11 ] Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. According to René Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". [ 12 ] The concept of virtual work , and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment"; it was re-named once the terminology of Poncelet and Coriolis was adopted. [ 13 ] [ 14 ] The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818–1889). According to the International Bureau of Weights and Measures it is defined as "the work done when the point of application of 1 MKS unit of force [newton] moves a distance of 1 metre in the direction of the force." [ 15 ] The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque . Usage of N⋅m is discouraged by the SI authority , since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement or a measurement of work. [ 16 ] Another unit for work is the foot-pound , which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft⋅lb. [ 17 ] [ 18 ] Non-SI units of work include the newton-metre, the erg , the foot-pound, the foot-poundal , the kilowatt-hour , the litre-atmosphere , and the horsepower-hour . Because work has the same physical dimension as heat , measurement units typically reserved for heat or energy content, such as the therm , BTU and calorie , are occasionally used as well. The work W done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product $W = \mathbf{F} \cdot \mathbf{s}$. For example, if a force of 10 newtons ( F = 10 N ) acts along a point that travels 2 metres ( s = 2 m ), then W = Fs = (10 N)(2 m) = 20 J . This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy . Energy shares the same unit of measurement with work (joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. [ 18 ] The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. [ 19 ]
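A quick numerical illustration of the work–energy principle for a constant net force on a point mass starting from rest (the values are chosen arbitrarily for the sketch):

```python
# Numerical check of the work-energy principle for a constant net force:
# W = F*d should equal the change in kinetic energy, (1/2) m v^2.

m, F, d = 2.0, 10.0, 3.0           # kg, N, m; the object starts at rest
W = F * d                          # work done by the net force: 30 J

a = F / m                          # acceleration from Newton's second law
v = (2 * a * d) ** 0.5             # v^2 = 2 a d, starting from rest
delta_ke = 0.5 * m * v**2 - 0.0    # change in kinetic energy

print(W, delta_ke)                 # -> 30.0 and ~30.0 (up to round-off)
```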
From Newton's second law , it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body is equal to the change in kinetic energy E k corresponding to the linear velocity and angular velocity of that body: $W = \Delta E_{\text{k}}$. The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative . Therefore, work on an object that is merely displaced in a conservative force field , without change in velocity or rotation, is equal to minus the change of potential energy E p of the object: $W = -\Delta E_{\text{p}}$. These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions , and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. This eliminates all displacements in that direction; that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system , [ 20 ] constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. [ 21 ] Fixed, frictionless constraint forces do not perform work on the system, [ 22 ] as the angle between the motion and the constraint forces is always 90° . [ 22 ] Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. [ 23 ] For example, in a pulley system like the Atwood machine , the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion: it constrains the ball to circular motion, restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball. The magnetic force on a charged particle is F = q v × B , where q is the charge, v is the velocity of the particle, and B is the magnetic field . The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v . The dot product of two perpendicular vectors is always zero, so the power $\mathbf{F} \cdot \mathbf{v} = 0$, and the magnetic force does not do work. It can change the direction of motion but never change the speed. For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts ) is the scalar product of the force (a vector) and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power .
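As a numerical check of the claim above that the magnetic force does no work, the sketch below computes F = q v × B for arbitrary values and verifies that F · v vanishes (the cross and dot products are written out explicitly):

```python
# The magnetic force F = q (v x B) is perpendicular to v, so F . v = 0 and
# the force does no work. Values below are arbitrary illustrations.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x*y for x, y in zip(a, b))

q = 1.6e-19                  # C, an electron-scale charge
v = (2.0e5, -1.0e5, 3.0e4)   # m/s
B = (0.0, 0.5, 0.2)          # T

F = tuple(q*c for c in cross(v, B))
print(dot(F, v))             # -> 0.0 (up to floating point round-off)
```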
Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus , the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. [ 24 ] Work is the result of a force on a point that follows a curve X , with a velocity v , at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as $\delta W = \mathbf{F} \cdot d\mathbf{s} = \mathbf{F} \cdot \mathbf{v}\,dt$, where $\mathbf{F} \cdot \mathbf{v}$ is the power over the instant dt . The sum of these small amounts of work over the trajectory of the point yields the work, $W = \int_{t_1}^{t_2} \mathbf{F} \cdot \mathbf{v}\,dt = \int_{t_1}^{t_2} \mathbf{F} \cdot \frac{d\mathbf{s}}{dt}\,dt = \int_C \mathbf{F} \cdot d\mathbf{s}$, where C is the trajectory from x ( t 1 ) to x ( t 2 ). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent . If the force is always directed along this line, and the magnitude of the force is F , then this integral simplifies to $W = \int_C F\,ds$, where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to $W = \int_C F\,ds = F \int_C ds = Fs$, where s is the displacement of the point along the line. This calculation can be generalized for a constant force that is not directed along the line followed by the particle. In this case the dot product $\mathbf{F} \cdot d\mathbf{s} = F\cos\theta\,ds$, where θ is the angle between the force vector and the direction of movement, [ 24 ] that is, $W = \int_C \mathbf{F} \cdot d\mathbf{s} = Fs\cos\theta$. When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force ), no work is done, since the cosine of 90° is zero. [ 19 ] Thus, no work can be performed by gravity on a planet in a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application-point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called the scalar tangential component ( F cos( θ ) , where θ is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows: the work of a force is the line integral of its scalar tangential component along the path of its application point. If the force varies (e.g. compressing a spring), we need to use calculus to find the work done.
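For instance, the work done against a varying spring force can be evaluated numerically and checked against the closed form. This is a minimal sketch; the spring constant, interval, and step count are arbitrary choices for the example.

```python
# Work done by a variable spring force F(x) = -k x from x1 to x2, computed
# numerically (midpoint rule) and compared with the closed form
# W = -(1/2) k (x2^2 - x1^2). Parameters are arbitrary illustrations.

k, x1, x2, n = 50.0, 0.0, 0.2, 100_000   # N/m, m, m, integration steps

dx = (x2 - x1) / n
W_num = sum(-k * (x1 + (i + 0.5) * dx) * dx for i in range(n))
W_exact = -0.5 * k * (x2**2 - x1**2)

print(W_num, W_exact)   # both about -1.0 J
```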
If the force as a function of x is given by F ( x ) , then the work done by the force along the x-axis from x 1 to x 2 is: W = ∫ x 1 x 2 F ( x ) d x . {\displaystyle W=\int _{x_{1}}^{x_{2}}F(x)\,dx.} Thus, the work done for a variable force can be expressed as a definite integral of force over displacement. [ 25 ] If the displacement as a function of time is given by Δ x ( t ) , then the work done by the variable force from t 1 to t 2 is: W = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 P ( t ) d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}P(t)\,dt.} Thus, the work done for a variable force can be expressed as a definite integral of power over time. A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T . The work of the torque is calculated as δ W = T ⋅ ω d t , {\displaystyle \delta W=\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt,} where the T ⋅ ω is the power over the instant dt . The sum of these small amounts of work over the trajectory of the rigid body yields the work, W = ∫ t 1 t 2 T ⋅ ω d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt.} This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent . If the angular velocity vector maintains a constant direction, then it takes the form, ω = ϕ ˙ S , {\displaystyle {\boldsymbol {\omega }}={\dot {\phi }}\mathbf {S} ,} where ϕ {\displaystyle \phi } is the angle of rotation about the constant unit vector S . In this case, the work of the torque becomes, W = ∫ t 1 t 2 T ⋅ ω d t = ∫ t 1 t 2 T ⋅ S d ϕ d t d t = ∫ C T ⋅ S d ϕ , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot \mathbf {S} {\frac {d\phi }{dt}}dt=\int _{C}\mathbf {T} \cdot \mathbf {S} \,d\phi ,} where C is the trajectory from ϕ ( t 1 ) {\displaystyle \phi (t_{1})} to ϕ ( t 2 ) {\displaystyle \phi (t_{2})} . This integral depends on the rotational trajectory ϕ ( t ) {\displaystyle \phi (t)} , and is therefore path-dependent. If the torque τ {\displaystyle \tau } is aligned with the angular velocity vector so that, T = τ S , {\displaystyle \mathbf {T} =\tau \mathbf {S} ,} and both the torque and angular velocity are constant, then the work takes the form, [ 2 ] W = ∫ t 1 t 2 τ ϕ ˙ d t = τ ( ϕ 2 − ϕ 1 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\tau {\dot {\phi }}\,dt=\tau (\phi _{2}-\phi _{1}).} This result can be understood more simply by considering the torque as arising from a force of constant magnitude F applied perpendicularly to a lever arm at a distance r {\displaystyle r} . This force will act through the distance along the circular arc l = s = r ϕ {\displaystyle l=s=r\phi } , so the work done is W = F s = F r ϕ . {\displaystyle W=Fs=Fr\phi .} Introducing the torque τ = Fr , one obtains W = F r ϕ = τ ϕ , {\displaystyle W=Fr\phi =\tau \phi ,} as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C = x ( t ) , defines the work input to the system by the force.
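A similar numerical check applies to the constant-torque result W = τ(φ 2 − φ 1 ) derived above; the torque, angular speed, and duration below are hypothetical:

import numpy as np

tau = 25.0                        # constant torque about the rotation axis (N*m)
omega = 4.0 * np.pi               # constant angular speed (rad/s), i.e. 2 rev/s
t = np.linspace(0.0, 3.0, 3001)   # time samples (s)

power = np.full_like(t, tau * omega)   # P = T . omega, constant here
W_numeric = np.trapz(power, t)         # W = integral of P dt
W_formula = tau * (omega * t[-1])      # W = tau (phi2 - phi1)
print(W_numeric, W_formula)            # both ~942.5 J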
The work done by a force F on an object that travels along a curve C is therefore given by the line integral : W = ∫ C F ⋅ d x = ∫ t 1 t 2 F ⋅ v d t , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt,} where dx ( t ) defines the trajectory C and v is the velocity along this trajectory. In general this integral depends on the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, d W d t = P ( t ) = F ⋅ v . {\displaystyle {\frac {dW}{dt}}=P(t)=\mathbf {F} \cdot \mathbf {v} .} If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem , defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function U ( x ) , that can be evaluated at the two points x ( t 1 ) and x ( t 2 ) to obtain the work over any trajectory between these two points. It is traditional to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = ∫ x ( t 1 ) x ( t 2 ) F ⋅ d x = U ( x ( t 1 ) ) − U ( x ( t 2 ) ) . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{\mathbf {x} (t_{1})}^{\mathbf {x} (t_{2})}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} (t_{1}))-U(\mathbf {x} (t_{2})).} The function U ( x ) is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative . Examples of forces that have potential energies are gravity and spring forces. In this case, the gradient of work yields ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle \nabla W=-\nabla U=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential." [ 26 ] Because the potential U defines a force F at every point x in space, the set of forces is called a force field . The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the body, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-\nabla U\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s −2 and the gravitational force on an object of mass m is F g = mg . It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight mg is displaced upwards or downwards a vertical distance y 2 − y 1 , the work W done on the object is: W = F g ( y 2 − y 1 ) = F g Δ y = m g Δ y {\displaystyle W=F_{g}(y_{2}-y_{1})=F_{g}\Delta y=mg\Delta y} where F g is weight (pounds in imperial units, and newtons in SI units), and Δ y is the change in height y . Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight.
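The relation F = −∇U can likewise be verified for a concrete potential. The sketch below (hypothetical spring constant) recovers the force from a central-difference gradient of the potential U(r) = ½ k r ⋅ r:

import numpy as np

k = 50.0                                  # spring constant (N/m), illustrative

def U(r):
    """Potential energy of a three-dimensional linear spring."""
    return 0.5 * k * np.dot(r, r)

def force_from_potential(r, h=1e-6):
    """F = -grad U, estimated by central differences."""
    F = np.zeros(3)
    for i in range(3):
        dr = np.zeros(3)
        dr[i] = h
        F[i] = -(U(r + dr) - U(r - dr)) / (2.0 * h)
    return F

r = np.array([0.2, -0.1, 0.3])            # evaluation point (m)
print(force_from_potential(r))            # ~ [-10.,  5., -15.]
print(-k * r)                             # exact conservative force -k r

For the near-Earth potential U = mgy the same procedure returns the constant force (0, −mg, 0), matching the formula above.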
The force of gravity exerted by a mass M on another mass m is given by F = − G M m r 2 r ^ = − G M m r 3 r , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }}=-{\frac {GMm}{r^{3}}}\mathbf {r} ,} where r is the position vector from M to m and r̂ is the unit vector in the direction of r . Let the mass m move at the velocity v ; then the work of gravity on this mass as it moves from position r ( t 1 ) to r ( t 2 ) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . {\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} Notice that the position and velocity of the mass m are given by r = r e r , v = d r d t = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\frac {d\mathbf {r} }{dt}}={\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t},} where e r and e t are the radial and tangential unit vectors directed relative to the vector from M to m , and we use the fact that d e r / d t = θ ˙ e t . {\displaystyle d\mathbf {e} _{r}/dt={\dot {\theta }}\mathbf {e} _{t}.} Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . {\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{r})\cdot \left({\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t}\right)dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} The function U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} is the gravitational potential function, also known as gravitational potential energy . The negative sign follows the convention that work is gained from a loss of potential energy. Consider a spring that exerts a horizontal force F = (− kx , 0, 0) that is proportional to its deflection in the x direction, independent of how the body moves. The work of this spring on a body moving along the space curve X ( t ) = ( x ( t ), y ( t ), z ( t )) is calculated using its velocity, v = ( v x , v y , v z ) , to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − 1 2 k x 2 . {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} dt=-\int _{0}^{t}kxv_{x}dt=-{\frac {1}{2}}kx^{2}.} For convenience, suppose contact with the spring occurs at t = 0 ; then the integral of the product of the distance x and the x -velocity, xv x dt , over time t is (1/2) x 2 . The work is the product of the distance and the spring force, which is also dependent on distance; hence the x 2 result. The work W {\displaystyle W} done by a body of gas on its surroundings is: W = ∫ a b P d V {\displaystyle W=\int _{a}^{b}P\,dV} where P is pressure, V is volume, and a and b are initial and final volumes. The principle of work and kinetic energy (also known as the work–energy principle ) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle.
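Before turning to that principle, the path independence of the gravitational work derived above can be verified numerically. In this minimal sketch the product GMm is lumped into one illustrative constant, and two different paths between the same endpoints yield the same work GMm/r(t 2 ) − GMm/r(t 1 ):

import numpy as np

GMm = 1.0e8                                # lumped constant G*M*m (J*m), illustrative

def work_along(path):
    """Accumulate W = sum of F(midpoint) . dr over a polyline path."""
    mids = 0.5 * (path[1:] + path[:-1])
    drs = np.diff(path, axis=0)
    r = np.linalg.norm(mids, axis=1, keepdims=True)
    F = -GMm * mids / r**3                 # inverse-square attraction toward the origin
    return np.sum(F * drs)

r1, r2 = 7.0e3, 1.4e4                      # start and end radii (m)
s = np.linspace(0.0, 1.0, 20001)[:, None]

# Path A: straight line from (r1, 0, 0) to (0, r2, 0).
A = (1 - s) * np.array([r1, 0.0, 0.0]) + s * np.array([0.0, r2, 0.0])

# Path B: radially outward along x, then a quarter circle at radius r2.
leg1 = (1 - s) * np.array([r1, 0.0, 0.0]) + s * np.array([r2, 0.0, 0.0])
ang = 0.5 * np.pi * s
leg2 = r2 * np.hstack([np.cos(ang), np.sin(ang), np.zeros_like(ang)])
B = np.vstack([leg1, leg2])

print(work_along(A), work_along(B), GMm / r2 - GMm / r1)   # all ~ -7142.9 J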
[ 27 ] That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy E k {\displaystyle E_{\text{k}}} , [ 2 ] W = Δ E k = 1 2 m v 2 2 − 1 2 m v 1 2 {\displaystyle W=\Delta E_{\text{k}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are the speeds of the particle before and after the work is done, and m is its mass . The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. [ 28 ] (Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics . [ 29 ] This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. [ 30 ] The relation between the net force and the acceleration is given by the equation F = ma ( Newton's second law ), and the particle displacement s can be expressed by the equation s = v 2 2 − v 1 2 2 a {\displaystyle s={\frac {v_{2}^{2}-v_{1}^{2}}{2a}}} which follows from v 2 2 = v 1 2 + 2 a s {\displaystyle v_{2}^{2}=v_{1}^{2}+2as} (see Equations of motion ). The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains: W = F s = m a s = m a v 2 2 − v 1 2 2 a = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=ma{\frac {v_{2}^{2}-v_{1}^{2}}{2a}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} Other derivation: W = F s = m a s = m v 2 2 − v 1 2 2 s s = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=m{\frac {v_{2}^{2}-v_{1}^{2}}{2s}}s={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 m a v d t = m ∫ t 1 t 2 v d v d t d t = m ∫ v 1 v 2 v d v = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}ma\,v\,dt=m\int _{t_{1}}^{t_{2}}v\,{\frac {dv}{dt}}\,dt=m\int _{v_{1}}^{v_{2}}v\,dv={\tfrac {1}{2}}m\left(v_{2}^{2}-v_{1}^{2}\right).} For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. 
It is known as the work–energy principle : W = ∫ t 1 t 2 F ⋅ v d t = m ∫ t 1 t 2 a ⋅ v d t = m 2 ∫ t 1 t 2 d v 2 d t d t = m 2 ∫ v 1 2 v 2 2 d v 2 = m v 2 2 2 − m v 1 2 2 = Δ E k {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=m\int _{t_{1}}^{t_{2}}\mathbf {a} \cdot \mathbf {v} dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {dv^{2}}{dt}}\,dt={\frac {m}{2}}\int _{v_{1}^{2}}^{v_{2}^{2}}dv^{2}={\frac {mv_{2}^{2}}{2}}-{\frac {mv_{1}^{2}}{2}}=\Delta E_{\text{k}}} The identity a ⋅ v = 1 2 d v 2 d t {\textstyle \mathbf {a} \cdot \mathbf {v} ={\frac {1}{2}}{\frac {dv^{2}}{dt}}} requires some algebra. From the identity v 2 = v ⋅ v {\textstyle v^{2}=\mathbf {v} \cdot \mathbf {v} } and definition a = d v d t {\textstyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}} it follows d v 2 d t = d ( v ⋅ v ) d t = d v d t ⋅ v + v ⋅ d v d t = 2 d v d t ⋅ v = 2 a ⋅ v . {\displaystyle {\frac {dv^{2}}{dt}}={\frac {d(\mathbf {v} \cdot \mathbf {v} )}{dt}}={\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} +\mathbf {v} \cdot {\frac {d\mathbf {v} }{dt}}=2{\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} =2\mathbf {a} \cdot \mathbf {v} .} The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion . It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory X ( t ) with a force F acting on it. Isolate the particle from its environment to expose constraint forces R , then Newton's Law takes the form F + R = m X ¨ , {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }},} where m is the mass of the particle. Note that n dots above a vector indicates its nth time derivative . The scalar product of each side of Newton's law with the velocity vector yields F ⋅ X ˙ = m X ¨ ⋅ X ˙ , {\displaystyle \mathbf {F} \cdot {\dot {\mathbf {X} }}=m{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X ( t 1 ) to the point X ( t 2 ) to obtain ∫ t 1 t 2 F ⋅ X ˙ d t = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t . {\displaystyle \int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt.} The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t 1 to time t 2 . This can also be written as W = ∫ t 1 t 2 F ⋅ X ˙ d t = ∫ X ( t 1 ) X ( t 2 ) F ⋅ d X . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=\int _{\mathbf {X} (t_{1})}^{\mathbf {X} (t_{2})}\mathbf {F} \cdot d\mathbf {X} .} This integral is computed along the trajectory X ( t ) of the particle and is therefore path dependent. 
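The principle can also be illustrated numerically for a force that is far from constant. The sketch below (hypothetical spring-mass values) integrates the motion with a velocity Verlet scheme and confirms that the time integral of the power F ⋅ v matches the change in kinetic energy:

import numpy as np

m, k = 2.0, 80.0                   # mass (kg) and spring constant (N/m)
dt, n = 1.0e-4, 10000              # time step (s) and number of steps
x, v = 0.5, 0.0                    # initial extension (m) and velocity (m/s)

ts = np.arange(n) * dt
xs, vs = np.empty(n), np.empty(n)
for i in range(n):                 # velocity Verlet integration of F = -k x
    xs[i], vs[i] = x, v
    a = -k * x / m
    x = x + v * dt + 0.5 * a * dt**2
    v = v + 0.5 * (a + (-k * x / m)) * dt

W = np.trapz((-k * xs) * vs, ts)   # work = time integral of the power F v
dEk = 0.5 * m * (vs[-1]**2 - vs[0]**2)
print(W, dEk)                      # agree to within numerical error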
The right side of the first integral of Newton's equations can be simplified using the following identity 1 2 d d t ( X ˙ ⋅ X ˙ ) = X ¨ ⋅ X ˙ , {\displaystyle {\frac {1}{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})={\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} (see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy, Δ K = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t = m 2 ∫ t 1 t 2 d d t ( X ˙ ⋅ X ˙ ) d t = m 2 X ˙ ⋅ X ˙ ( t 2 ) − m 2 X ˙ ⋅ X ˙ ( t 1 ) = 1 2 m Δ v 2 , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})dt={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{2})-{\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{1})={\frac {1}{2}}m\Delta \mathbf {v} ^{2},} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 X ˙ ⋅ X ˙ = 1 2 m v 2 {\displaystyle K={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}={\frac {1}{2}}m{\mathbf {v} ^{2}}} It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X ( t ) , such that X ˙ = v T and X ¨ = v ˙ T + v 2 κ N , {\displaystyle {\dot {\mathbf {X} }}=v\mathbf {T} \quad {\text{and}}\quad {\ddot {\mathbf {X} }}={\dot {v}}\mathbf {T} +v^{2}\kappa \mathbf {N} ,} where v = | X ˙ | = X ˙ ⋅ X ˙ . {\displaystyle v=|{\dot {\mathbf {X} }}|={\sqrt {{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}}}.} Then, the scalar product of velocity with acceleration in Newton's second law takes the form Δ K = m ∫ t 1 t 2 v ˙ v d t = m 2 ∫ t 1 t 2 d d t v 2 d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\dot {v}}v\,dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}v^{2}\,dt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}),} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 v 2 = m 2 X ˙ ⋅ X ˙ . {\displaystyle K={\frac {m}{2}}v^{2}={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}.} The result is the work–energy principle for particle dynamics, W = Δ K . {\displaystyle W=\Delta K.} This derivation can be generalized to arbitrary rigid body systems. Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F . The constraint forces between the vehicle and the road define R , and we have F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} For convenience let the trajectory be along the X-axis, so X = ( d , 0) and the velocity is V = ( v , 0) , then R ⋅ V = 0 , and F ⋅ V = F x v , where F x is the component of F along the X-axis, so F x v = m v ˙ v . {\displaystyle F_{x}v=m{\dot {v}}v.} Integration of both sides yields ∫ t 1 t 2 F x v d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}F_{x}vdt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} If F x is constant along the trajectory, then the integral of velocity is distance, so F x ( d ( t 2 ) − d ( t 1 ) ) = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle F_{x}(d(t_{2})-d(t_{1}))={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} As an example consider a car skidding to a stop, where k is the coefficient of friction and w is the weight of the car. Then the force along the trajectory is F x = − kw . 
The velocity v of the car can be determined from the length s of the skid using the work–energy principle, k w s = w 2 g v 2 , or v = 2 k s g . {\displaystyle kws={\frac {w}{2g}}v^{2},\quad {\text{or}}\quad v={\sqrt {2ksg}}.} This formula uses the fact that the mass of the vehicle is m = w / g . For the case of a vehicle that starts at rest and coasts down an inclined surface (such as a mountain road), the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity V , of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down, so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be X ( t ) , which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F = (0, 0, w ) , while the force of the road on the vehicle is the constraint force R . Newton's second law yields, F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} The scalar product of this equation with the velocity, V = ( v x , v y , v z ) , yields w v z = m V ˙ V , {\displaystyle wv_{z}=m{\dot {V}}V,} where V is the magnitude of V . The constraint forces between the vehicle and the road cancel from this equation because R ⋅ V = 0 , which means they do no work. Integrate both sides to obtain ∫ t 1 t 2 w v z d t = m 2 V 2 ( t 2 ) − m 2 V 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}wv_{z}dt={\frac {m}{2}}V^{2}(t_{2})-{\frac {m}{2}}V^{2}(t_{1}).} The weight force w is constant along the trajectory and the integral of the vertical velocity is the vertical distance; therefore, recalling that V ( t 1 ) = 0, w Δ z = m 2 V 2 . {\displaystyle w\Delta z={\frac {m}{2}}V^{2}.} Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road, assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled; for angles this small the sine and tangent functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least s = Δ z 0.06 = 8.3 V 2 g , or s = 8.3 88 2 32.2 ≈ 2000 f t . {\displaystyle s={\frac {\Delta z}{0.06}}=8.3{\frac {V^{2}}{g}},\quad {\text{or}}\quad s=8.3{\frac {88^{2}}{32.2}}\approx 2000\mathrm {ft} .} This formula uses the fact that the weight of the vehicle is w = mg . The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque . To see this, let the forces F 1 , F 2 , ..., F n act on the points X 1 , X 2 , ..., X n in a rigid body. The trajectories of X i , i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [ A ( t )] and the trajectory d ( t ) of a reference point in the body. Let the coordinates x i , i = 1, ..., n define these points in the moving rigid body's reference frame M , so that the trajectories traced in the fixed frame F are given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n .
{\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n.} The velocity of the points X i along their trajectories are V i = ω × ( X i − d ) + d ˙ , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }},} where ω is the angular velocity vector obtained from the skew symmetric matrix [ Ω ] = A ˙ A T , {\displaystyle [\Omega ]={\dot {A}}A^{\mathsf {T}},} known as the angular velocity matrix. The small amount of work by the forces over the small displacements δ r i can be determined by approximating the displacement by δ r = v δt so δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + … + F n ⋅ V n δ t {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\ldots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t} or δ W = ∑ i = 1 n F i ⋅ ( ω × ( X i − d ) + d ˙ ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }})\delta t.} This formula can be rewritten to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d ˙ δ t + ( ∑ i = 1 n ( X i − d ) × F i ) ⋅ ω δ t = ( F ⋅ d ˙ + T ⋅ ω ) δ t , {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot {\dot {\mathbf {d} }}\delta t+\left(\sum _{i=1}^{n}\left(\mathbf {X} _{i}-\mathbf {d} \right)\times \mathbf {F} _{i}\right)\cdot {\boldsymbol {\omega }}\delta t=\left(\mathbf {F} \cdot {\dot {\mathbf {d} }}+\mathbf {T} \cdot {\boldsymbol {\omega }}\right)\delta t,} where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body.
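The resultant-force form of the power just derived, Σ F i ⋅ V i = F ⋅ ḋ + T ⋅ ω, can be spot-checked with random data; everything below is an arbitrary test value:

import numpy as np

rng = np.random.default_rng(1)
n = 4
F = rng.normal(size=(n, 3))            # forces applied at n body points
X = rng.normal(size=(n, 3))            # current positions of those points
d = rng.normal(size=3)                 # reference point on the body
d_dot = rng.normal(size=3)             # velocity of the reference point
w = rng.normal(size=3)                 # angular velocity vector

V = np.cross(w, X - d) + d_dot         # rigid-body point velocities V_i
P_pointwise = np.sum(F * V)            # sum over i of F_i . V_i

F_res = F.sum(axis=0)                  # resultant force
T_res = np.cross(X - d, F).sum(axis=0) # resultant torque about d
print(P_pointwise, F_res @ d_dot + T_res @ w)   # equal up to rounding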
https://en.wikipedia.org/wiki/Work_(physics)
Thermodynamic work is one of the principal kinds of process by which a thermodynamic system can interact with and transfer energy to its surroundings. This results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work , to lift a weight, for example, [ 1 ] or cause changes in electromagnetic, [ 2 ] [ 3 ] [ 4 ] or gravitational [ 5 ] variables. Also, the surroundings can perform thermodynamic work on a thermodynamic system, which is measured by an opposite sign convention. For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume [ 1 ] or magnetic flux density and magnetization. [ 3 ] In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power , measured in joules per second, and denoted with the unit watt (W). Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire , where he used the term motive power for work. Specifically, according to Carnot: We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised. In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge . [ 6 ] In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water. In this experiment, the motion of the paddle wheel, through agitation and friction , heated the body of water, so as to increase its temperature . Both the temperature change Δ T {\displaystyle \Delta T} of the water and the height of the fall Δ h {\displaystyle \Delta h} of the weight m g {\displaystyle mg} were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat . Joule estimated a mechanical equivalent of heat to be 819 ft•lbf/Btu (4.41 J/cal). The modern day definitions of heat, work, temperature, and energy all have connection to this experiment. In this arrangement of apparatus, it never happens that the process runs in reverse, with the water driving the paddles so as to raise the weight, not even slightly. Mechanical work was done by the apparatus of falling weight, pulley, and paddles, which lay in the surroundings of the water. Their motion scarcely affected the volume of the water. A quantity of mechanical work, measured as force × distance in the surroundings, that does not change the volume of the water, is said to be isochoric. Such work reaches the system only as friction, through microscopic modes, and is irreversible. It does not count as thermodynamic work. The energy supplied by the fall of the weight passed into the water as heat. A fundamental guiding principle of thermodynamics is the conservation of energy. The total energy of a system is the sum of its internal energy, of its potential energy as a whole system in an external force field, such as gravity, and of its kinetic energy as a whole system in motion. 
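As an arithmetic aside, Joule's figure quoted above can be reproduced from standard unit-conversion factors alone (a quick sanity check, not new data):

# 819 ft*lbf per Btu expressed in joules per calorie.
J_PER_FT_LBF = 1.3558179      # joules per foot-pound force
CAL_PER_BTU = 251.9958        # (International Table) calories per Btu
print(819 * J_PER_FT_LBF / CAL_PER_BTU)   # ~4.41 J/cal, as quoted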
Thermodynamics has special concern with transfers of energy, from a body of matter, such as, for example a cylinder of steam, to the surroundings of the body, by mechanisms through which the body exerts macroscopic forces on its surroundings so as to lift a weight there; such mechanisms are the ones that are said to mediate thermodynamic work. Besides transfer of energy as work, thermodynamics admits transfer of energy as heat . For a process in a closed (no transfer of matter) thermodynamic system, the first law of thermodynamics relates changes in the internal energy (or other cardinal energy function , depending on the conditions of the transfer) of the system to those two modes of energy transfer, as work, and as heat. Adiabatic work is done without matter transfer and without heat transfer. In principle, in thermodynamics, for a process in a closed system, the quantity of heat transferred is defined by the amount of adiabatic work that would be needed to effect the change in the system that is occasioned by the heat transfer. In experimental practice, heat transfer is often estimated calorimetrically, through change of temperature of a known quantity of calorimetric material substance. Energy can also be transferred to or from a system through transfer of matter. The possibility of such transfer defines the system as an open system, as opposed to a closed system. By definition, such transfer is neither as work nor as heat. Changes in the potential energy of a body as a whole with respect to forces in its surroundings, and in the kinetic energy of the body moving as a whole with respect to its surroundings, are by definition excluded from the body's cardinal energy (examples are internal energy and enthalpy). In the surroundings of a thermodynamic system, external to it, all the various mechanical and non-mechanical macroscopic forms of work can be converted into each other with no limitation in principle due to the laws of thermodynamics, so that the energy conversion efficiency can approach 100% in some cases; such conversion is required to be frictionless, and consequently adiabatic . [ 7 ] In particular, in principle, all macroscopic forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule (see History section above). Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. [ 8 ] [ 9 ] [ 10 ] [ 11 ] For example, with the apparatus of Joule's experiment in which, through pulleys, a weight descending in the surroundings drives the stirring of a thermodynamic system, the descent of the weight can be diverted by a re-arrangement of pulleys, so that it lifts another weight in the surroundings, instead of stirring the thermodynamic system. Such conversion may be idealized as nearly frictionless, though it occurs relatively quickly. It usually comes about through devices that are not simple thermodynamic systems (a simple thermodynamic system is a homogeneous body of material substances). For example, the descent of the weight in Joule's stirring experiment reduces the weight's total energy. It is described as loss of gravitational potential energy by the weight, due to change of its macroscopic position in the gravity field, in contrast to, for example, loss of the weight's internal energy due to changes in its entropy, volume, and chemical composition. 
Though it occurs relatively rapidly, because the energy remains nearly fully available as work in one way or another, such diversion of work in the surroundings may be idealized as nearly reversible, or nearly perfectly efficient. In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency , as a consequence of the second law of thermodynamics . Such energy conversion, through work done relatively rapidly, in a practical heat engine, by a thermodynamic system on its surroundings, cannot be idealized, not even nearly, as reversible. Thermodynamic work done by a thermodynamic system on its surroundings is defined so as to comply with this principle. Historically, thermodynamics was about how a thermodynamic system could do work on its surroundings. Thermodynamic work and ordinary mechanical work are to be distinguished. Thermodynamic work is defined by the changes of the thermodynamic system's own internal state variables, such as volume, electric polarization, and magnetization, but excluding temperature and entropy. [ 12 ] [ 13 ] [ 14 ] Ordinary mechanical work includes work done by compression , as well as shaft work, stirring, and rubbing; but shaft work, stirring, and rubbing are not thermodynamic work in so far as they do not change the volume of the system, though they change the temperature or entropy of the system. Work without change of volume is known as isochoric work, for example when friction acts on the surface or in the interior of the system. In a process of transfer of energy from or to a thermodynamic system, the change of internal energy of the system is defined in theory by the amount of adiabatic work that would have been necessary to reach the final state from the initial state, such adiabatic work being measurable only through the externally measurable mechanical or deformation variables of the system, that provide full information about the forces exerted by the surroundings on the system during the process. In the case of some of Joule's measurements, the process was so arranged that some heating that occurred outside the system (in the substance of the paddles) by the frictional process also led to heat transfer from the paddles into the system during the process, so that the quantity of work done by the surroundings on the system could be calculated as shaft work, an external mechanical variable. [ 15 ] [ 16 ] The amount of energy transferred as thermodynamic work is measured through quantities defined externally to the system of interest, that belong to its surroundings, but that are matched by internal state variables of the system, such as pressure. In an important sign convention, preferred in chemistry and by many physicists, thermodynamic work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention, preferred in physics, is to consider thermodynamic work done by the system on its surroundings as positive. Transfer of thermal energy through direct contact between a closed system and its surroundings is by the microscopic thermal motions of particles and their associated inter-molecular potential energies. [ 17 ] The microscopic description of such processes is the province of statistical mechanics, not of macroscopic thermodynamics. Another kind of energy transfer is by radiation, performing work on the system.
[ 18 ] [ 19 ] Radiative transfer of energy is irreversible in the sense that it occurs only from a hotter to a colder system. There are several forms of dissipative transduction of energy that can occur internally within a system at a microscopic level, such as friction, including bulk and shear viscosity, [ 20 ] chemical reaction , [ 2 ] unconstrained expansion as in Joule expansion and in diffusion , and phase change . [ 2 ] For an open system, the first law of thermodynamics admits three forms of energy transfer, as work, as heat, and as energy associated with matter that is transferred. The latter cannot be split uniquely into heat and work components. One-way convection of internal energy is a form of transport of energy but is not, as sometimes mistakenly supposed (a relic of the caloric theory of heat), transfer of energy as heat, because one-way convection is transfer of matter; nor is it transfer of energy as work. Nevertheless, if the wall between the system and its surroundings is thick and contains fluid, in the presence of a gravitational field, convective circulation within the wall can be considered as indirectly mediating transfer of energy as heat between the system and its surroundings, though the source and destination of the transferred energy are not in direct contact. For purposes of theoretical calculations about a thermodynamic system, one can imagine fictive idealized thermodynamic "processes" that occur so slowly that they do not incur friction within or on the surface of the system; they can then be regarded as virtually reversible. These fictive processes proceed along paths on geometrical surfaces that are described exactly by a characteristic equation of the thermodynamic system. Those geometrical surfaces are the loci of possible states of thermodynamic equilibrium for the system. Really possible thermodynamic processes, occurring at practical rates, even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, always incur friction within the system, and so are always irreversible. The paths of such really possible processes always depart from those geometrical characteristic surfaces. Even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, such departures always entail entropy production. The definition of thermodynamic work is in terms of the changes of the system's extensive deformation [ 21 ] (and chemical constitutive and certain other) state variables, such as volume, molar chemical constitution, or electric polarisation. Examples of state variables that are not extensive deformation or other such variables are temperature T and entropy S , as for example in the expression U = U ( S , V , { N j }) . Changes of such variables are not actually physically measurable by use of a single simple adiabatic thermodynamic process; they are processes that occur neither by thermodynamic work nor by transfer of matter, and therefore are said to occur by heat transfer. The quantity of thermodynamic work is defined as work done by the system on its surroundings. According to the second law of thermodynamics , such work is irreversible. To get an actual and precise physical measurement of a quantity of thermodynamic work, it is necessary to take account of the irreversibility by restoring the system to its initial condition by running a cycle, for example a Carnot cycle, that includes the target work as a step.
The work done by the system on its surroundings is calculated from the quantities that constitute the whole cycle. [ 22 ] A different cycle would be needed to actually measure the work done by the surroundings on the system. This is a reminder that rubbing the surface of a system appears to the rubbing agent in the surroundings as mechanical, though not thermodynamic, work done on the system, not as heat, but appears to the system as heat transferred to the system, not as thermodynamic work. The production of heat by rubbing is irreversible; [ 23 ] historically, it was a piece of evidence for the rejection of the caloric theory of heat as a conserved substance. [ 24 ] The irreversible process known as Joule heating also occurs through a change of a non-deformation extensive state variable. Accordingly, in the opinion of Lavenda, work is not as primitive a concept as heat, which can be measured by calorimetry. [ 25 ] This opinion does not negate the now customary thermodynamic definition of heat in terms of adiabatic work. Known as a thermodynamic operation , the initiating factor of a thermodynamic process is, in many cases, a change in the permeability of a wall between the system and the surroundings. Rubbing is not a change in wall permeability. Kelvin's statement of the second law of thermodynamics uses the notion of an "inanimate material agency"; this notion is sometimes regarded as puzzling. [ 26 ] The triggering of a process of rubbing can occur only in the surroundings, not in a thermodynamic system in its own state of internal thermodynamic equilibrium. Such triggering may be described as a thermodynamic operation. In thermodynamics, the quantity of work done by a closed system on its surroundings is defined by factors strictly confined to the interface of the surroundings with the system and to the surroundings of the system, for example, an extended gravitational field in which the system sits, that is to say, to things external to the system. A main concern of thermodynamics is the properties of materials. Thermodynamic work is defined for the purposes of thermodynamic calculations about bodies of material, known as thermodynamic systems. Consequently, thermodynamic work is defined in terms of quantities that describe the states of materials, which appear as the usual thermodynamic state variables, such as volume, pressure, temperature, chemical composition, and electric polarization. For example, to measure the pressure inside a system from outside it, the observer needs the system to have a wall that can move by a measurable amount in response to pressure differences between the interior of the system and the surroundings. In this sense, part of the definition of a thermodynamic system is the nature of the walls that confine it. Several kinds of thermodynamic work are especially important. One simple example is pressure–volume work. The pressure of concern is that exerted by the surroundings on the surface of the system, and the volume of interest is the negative of the increment of volume gained by the system from the surroundings. It is usually arranged that the pressure exerted by the surroundings on the surface of the system is well defined and equal to the pressure exerted by the system on the surroundings. This arrangement for transfer of energy as work can be varied in a particular way that depends on the strictly mechanical nature of pressure–volume work.
The variation consists in letting the coupling between the system and surroundings be through a rigid rod that links pistons of different areas for the system and surroundings. Then for a given amount of work transferred, the exchange of volumes involves different pressures, inversely with the piston areas, for mechanical equilibrium . This cannot be done for the transfer of energy as heat because of its non-mechanical nature. [ 27 ] Another important kind of work is isochoric work, i.e., work that involves no eventual overall change of volume of the system between the initial and the final states of the process. Examples are friction on the surface of the system as in Rumford's experiment; shaft work such as in Joule's experiments; stirring of the system by a magnetic paddle inside it, driven by a moving magnetic field from the surroundings; and vibrational action on the system that leaves its eventual volume unchanged, but involves friction within the system. Isochoric mechanical work for a body in its own state of internal thermodynamic equilibrium is done only by the surroundings on the body, not by the body on the surroundings, so that the sign of isochoric mechanical work with the physics sign convention is always negative. When work, for example pressure–volume work, is done on its surroundings by a closed system that cannot pass heat in or out because it is confined by an adiabatic wall, the work is said to be adiabatic for the system as well as for the surroundings. When mechanical work is done on such an adiabatically enclosed system by the surroundings, it can happen that friction in the surroundings is negligible, for example in the Joule experiment with the falling weight driving paddles that stir the system. Such work is adiabatic for the surroundings, even though it is associated with friction within the system. Such work may or may not be isochoric for the system, depending on the system and its confining walls. If it happens to be isochoric for the system (and does not eventually change other system state variables such as magnetization), it appears as a heat transfer to the system, and does not appear to be adiabatic for the system. In the early history of thermodynamics, a positive amount of work done by the system on the surroundings leads to energy being lost from the system. This historical sign convention has been used in many physics textbooks and is used in the present article. [ 28 ] According to the first law of thermodynamics for a closed system, any net change in the internal energy U must be fully accounted for, in terms of heat Q entering the system and work W done by the system: [ 17 ] Δ U = Q − W . {\displaystyle \Delta U=Q-W.} An alternate sign convention is to consider the work performed on the system by its surroundings as positive. This leads to a change in sign of the work, so that Δ U = Q + W {\displaystyle \Delta U=Q+W} . This convention has historically been used in chemistry, and has been adopted by most physics textbooks. [ 28 ] [ 30 ] [ 31 ] [ 32 ] This equation reflects the fact that the heat transferred and the work done are not properties of the state of the system. Given only the initial state and the final state of the system, one can only say what the total change in internal energy was, not how much of the energy went out as heat, and how much as work. This can be summarized by saying that heat and work are not state functions of the system. [ 17 ] This is in contrast to classical mechanics, where the net work done on a particle, being equal to its change in kinetic energy, is a state function.
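The two sign conventions describe the same physics; a trivial bookkeeping sketch (made-up numbers, Python used only as a calculator) makes the correspondence explicit:

Q = 120.0                 # heat entering the system (J)
W_by = 45.0               # work done BY the system on the surroundings (J)

dU_physics = Q - W_by     # Delta U = Q - W, work done by the system positive
W_on = -W_by              # the same interaction under the opposite convention
dU_chemistry = Q + W_on   # Delta U = Q + W, work done ON the system positive
assert dU_physics == dU_chemistry == 75.0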
Pressure–volume work (or PV or P - V work) occurs when the volume V of a system changes. PV work is often measured in units of litre-atmospheres where 1 L·atm = 101.325 J . However, the litre-atmosphere is not a recognized unit in the SI system of units, which measures P in pascals (Pa), V in m 3 , and PV in joules (J), where 1 J = 1 Pa·m 3 . PV work is an important topic in chemical thermodynamics . For a process in a closed system , occurring slowly enough for accurate definition of the pressure on the inside of the system's wall that moves and transmits force to the surroundings, described as quasi-static , [ 33 ] [ 34 ] work is represented by the following equation between differentials : δ W = P d V {\displaystyle \delta W=P\,dV} where δ W denotes an infinitesimal increment of work done by the system, P is the pressure inside the system, and d V denotes an infinitesimal increment of the volume of the system. Moreover, W = ∫ V i V f P d V , {\displaystyle W=\int _{V_{i}}^{V_{f}}P\,dV.} where W {\displaystyle W} denotes the work done by the system during the whole of the reversible process. The first law of thermodynamics can then be expressed as [ 17 ] d U = δ Q − P d V . {\displaystyle dU=\delta Q-PdV\,.} (In the alternative sign convention, where W is the work done on the system, δ W = − P d V {\displaystyle \delta W=-P\,dV} ; however, d U = δ Q − P d V {\displaystyle dU=\delta Q-P\,dV} is unchanged.) PV work is path-dependent and is, therefore, a thermodynamic process function . In general, the term P d V {\displaystyle P\,dV} is not an exact differential . [ 36 ] The statement that a process is quasi-static gives important information about the process but does not determine the P–V path uniquely, because the path can include several slow excursions backwards and forwards in volume, slowly enough to exclude friction within the system occasioned by departure from the quasi-static requirement. An adiabatic wall is one that does not permit passage of energy by conduction or radiation. The first law of thermodynamics states that Δ U = Q − W {\displaystyle \Delta U=Q-W} . For a quasi-static adiabatic process, δ Q = 0 {\displaystyle \delta Q=0} so that Q = ∫ δ Q = 0. {\displaystyle Q=\int \delta Q=0.} Also δ W = P d V {\displaystyle \delta W=PdV} so that W = ∫ δ W = ∫ P d V . {\displaystyle W=\int \delta W=\int P\,dV.} It follows that d U = − δ W {\displaystyle dU=-\delta W} so that Δ U = − ∫ P d V . {\displaystyle \Delta U=-\int P\,dV.} Internal energy is a state function, so its change depends only on the initial and final states of a process. For a quasi-static adiabatic process, the change in internal energy is equal to minus the integral amount of work done by the system, so the work also depends only on the initial and final states of the process and is one and the same for every intermediate path. If the process path is other than quasi-static and adiabatic, there are indefinitely many different paths, with significantly different work amounts, between the initial and final states. (Again the internal energy change depends only on the initial and final states as it is a state function .)
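As a concrete case of the PV integral, consider the quasi-static isothermal expansion of an ideal gas, for which the closed form is W = nRT ln(V f /V i ). A minimal numerical sketch with hypothetical amounts:

import numpy as np

n_mol, R, T = 1.0, 8.314, 300.0        # amount (mol), gas constant, temperature (K)
Vi, Vf = 0.010, 0.030                  # initial and final volumes (m^3)

V = np.linspace(Vi, Vf, 100001)
P = n_mol * R * T / V                  # ideal-gas pressure along the path
W_numeric = np.trapz(P, V)             # W = integral of P dV, work done by the gas
W_closed = n_mol * R * T * np.log(Vf / Vi)
print(W_numeric, W_closed)             # both ~2740 J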
In the current mathematical notation, the differential δ W {\displaystyle \delta W} is an inexact differential . [ 17 ] In another notation, δ W is written đ W (with a horizontal line through the d). This notation indicates that đ W is not an exact one-form . The line-through is merely a flag to warn us there is actually no function ( 0-form ) W which is the potential of đ W . If there were, indeed, this function W , we would be able to use Stokes' theorem to evaluate this putative function, the potential of đ W , at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work at a point in the PV diagram; work presupposes a path. There are several ways of doing mechanical work, each in some way related to a force acting through a distance. [ 37 ] In basic mechanics, the work done by a constant force F on a body displaced a distance s in the direction of the force is given by W = F s . {\displaystyle W=Fs.} If the force is not constant, the work done is obtained by integrating the differential amount of work, W = ∫ F d s . {\displaystyle W=\int F\,ds.} Energy transmission with a rotating shaft is very common in engineering practice. Often the torque T applied to the shaft is constant, which means that the force F applied is constant. For a specified constant torque, the work done during n revolutions is determined as follows: a force F acting through a moment arm r generates a torque T = F r . {\displaystyle T=Fr.} This force acts through a distance s , which is related to the radius r by s = ( 2 π r ) n . {\displaystyle s=(2\pi r)n.} The shaft work is then determined from: W = F s = ( T / r ) ( 2 π r n ) = 2 π n T . {\displaystyle W=Fs=(T/r)(2\pi rn)=2\pi nT.} The power transmitted through the shaft is the shaft work done per unit time, which is expressed as W ˙ = 2 π n ˙ T , {\displaystyle {\dot {W}}=2\pi {\dot {n}}T,} where n ˙ {\displaystyle {\dot {n}}} is the number of revolutions per unit time. When a force is applied on a spring, and the length of the spring changes by a differential amount dx , the work done is δ W = F d x . {\displaystyle \delta W=F\,dx.} For linear elastic springs, the displacement x is proportional to the force applied, F = K x , {\displaystyle F=Kx,} where K is the spring constant and has the unit of N/m. The displacement x is measured from the undisturbed position of the spring (that is, x = 0 when F = 0 ). Substituting the two equations gives W = 1 2 K ( x 2 2 − x 1 2 ) , {\displaystyle W={\tfrac {1}{2}}K(x_{2}^{2}-x_{1}^{2}),} where x 1 and x 2 are the initial and the final displacement of the spring respectively, measured from the undisturbed position of the spring. Solids are often modeled as linear springs because under the action of a force they contract or elongate, and when the force is lifted, they return to their original lengths, like a spring. This is true as long as the force is in the elastic range, that is, not large enough to cause permanent or plastic deformation. Therefore, the equations given for a linear spring can also be used for elastic solid bars. Alternately, we can determine the work associated with the expansion or contraction of an elastic solid bar by replacing the pressure P by its counterpart in solids, normal stress σ = F / A , in the work expansion W = ∫ σ d V = ∫ σ A d x , {\displaystyle W=\int \sigma \,dV=\int \sigma A\,dx,} where A is the cross-sectional area of the bar. Consider a liquid film such as a soap film suspended on a wire frame. Some force is required to stretch this film by the movable portion of the wire frame. This force is used to overcome the microscopic forces between molecules at the liquid-air interface. These microscopic forces are perpendicular to any line in the surface, and the force generated by these forces per unit length is called the surface tension σ , whose unit is N/m. Therefore, the work associated with the stretching of a film is called surface tension work, and is determined from W = ∫ σ d A , {\displaystyle W=\int \sigma \,dA,} where dA = 2 b dx is the change in the surface area of the film. The factor 2 is due to the fact that the film has two surfaces in contact with air. The force acting on the moveable wire as a result of surface tension effects is F = 2 b σ , where σ is the surface tension force per unit length.
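Two of these formulas, the constant-torque shaft work W = 2πnT and the spring work ½K(x 2 2 − x 1 2 ), are easy to check numerically; the values below are made up:

import numpy as np

# Shaft work for a constant torque over n revolutions.
T_shaft, n_rev = 200.0, 10.0                 # torque (N*m) and revolutions
print(2 * np.pi * n_rev * T_shaft)           # ~12566 J

# Spring work between two displacements, by quadrature and in closed form.
K, x1, x2 = 1.0e4, 0.02, 0.07                # spring constant (N/m) and limits (m)
x = np.linspace(x1, x2, 10001)
print(np.trapz(K * x, x), 0.5 * K * (x2**2 - x1**2))   # both 22.5 J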
The amount of useful work which may be extracted from a thermodynamic system is determined by the second law of thermodynamics . Under many practical situations this can be represented by the thermodynamic availability, or exergy , function. Two important cases are: in thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy function; and in systems where the temperature and pressure are held constant, the measure of useful work attainable is the Gibbs free energy . Non-mechanical work in thermodynamics is work caused by external force fields that a system is exposed to. The action of such forces can be initiated by events in the surroundings of the system, or by thermodynamic operations on the shielding walls of the system. The non-mechanical work of force fields can have either positive or negative sign, work being done by the system on the surroundings, or vice versa . Work done by force fields can be done indefinitely slowly, so as to approach the fictive reversible quasi-static ideal, in which entropy is not created in the system by the process. In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surroundings is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surroundings; the transfer of energy can then be considered as convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may be convective circulation within the wall, but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system. [ 38 ] Non-mechanical work contrasts with pressure–volume work. Pressure–volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is due to the pressure exerted on the interfacing wall by the material inside the system; that pressure is an internal state variable of the system, but is properly measured by external devices at the wall.
The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. Pressure–volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure–volume work can have either positive or negative sign. Pressure–volume work, performed slowly enough, can be made to approach the fictive reversible quasi-static ideal. Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure–volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. [ 39 ] Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. [ 40 ] The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy. Examples of non-mechanical work modes include work done by external force fields, such as gravitational, electric, and magnetic fields. Gravitational work is defined by the force on a body measured in a gravitational field . It may cause a generalized displacement in the form of change of the spatial distribution of the matter within the system. The system gains internal energy (or other relevant cardinal quantity of energy, such as enthalpy) through internal friction. As seen by the surroundings, such frictional work appears as mechanical work done on the system, but as seen by the system, it appears as transfer of energy as heat. When the system is in its own state of internal thermodynamic equilibrium, its temperature is uniform throughout.
If the volume and other extensive state variables, apart from entropy, are held constant over the process, then the transferred heat must appear as increased temperature and entropy; in a uniform gravitational field, the pressure of the system will be greater at the bottom than at the top. By definition, the relevant cardinal energy function is distinct from the gravitational potential energy of the system as a whole; the latter may also change as a result of gravitational work done by the surroundings on the system. The gravitational potential energy of the system is a component of its total energy, alongside its other components, namely its cardinal thermodynamic (e.g. internal) energy and its kinetic energy as a whole system in motion.
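Returning to the pressure–volume work described earlier, the quasi-static work done by an expanding gas is the integral of p dV along the path of the process. The following minimal sketch (illustrative values, and assuming an ideal gas; not from the article) checks the numerical integral against the closed-form isothermal result W = nRT ln(V2/V1):

```python
import numpy as np

R = 8.314  # molar gas constant, J/(mol K)

def pv_work_isothermal(n, T, V1, V2, steps=100_000):
    """Quasi-static work done BY the system: integral of p dV for an ideal gas at constant T."""
    V = np.linspace(V1, V2, steps)
    p = n * R * T / V              # ideal-gas pressure along the path
    return np.trapz(p, V)          # positive for expansion, negative for contraction

n, T = 1.0, 300.0                  # 1 mol at 300 K (illustrative)
W_num = pv_work_isothermal(n, T, V1=0.010, V2=0.020)
W_exact = n * R * T * np.log(0.020 / 0.010)
print(f"numerical: {W_num:.2f} J  closed form: {W_exact:.2f} J")
```

The sign convention here matches the article's: expansion counts as positive work done by the system on the surroundings.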
https://en.wikipedia.org/wiki/Work_(thermodynamics)
Work 4.0 ( German : Arbeit 4.0 ) is the conceptual umbrella under which the future of work is discussed in Germany and, to some extent, within the European Union . [ 1 ] It describes how the world of work may change through 2030 [ 2 ] and beyond in response to the developments associated with Industry 4.0 , including widespread digitalization . [ 3 ] The concept was first introduced in November 2015 by the German Federal Ministry of Labour and Social Affairs (BMAS) when it launched a report entitled Re-Imagining Work: Green Paper Work 4.0 . [ 4 ] It has since been taken up by trade unions such as the DGB [ 5 ] and various employers' and industry associations such as the VDMA [ 6 ] and the BDA . [ 7 ] At the global level, similar topics are addressed by the World Bank 's 2019 World Development Report The Changing Nature of Work [ 8 ] and the ILO 's Future of Work Centenary Initiative . [ 9 ]

Conceptually, Work 4.0 reflects the current fourth phase of work relations, having been preceded by the birth of industrial society and the first workers' organizations in the late 18th century (Work 1.0), the beginning of mass production and of the welfare state in the late 19th century (Work 2.0), and the advent of globalization , digitalization and the transformation of the social market economy since the 1970s (Work 3.0). By contrast, Work 4.0 is characterized by a high degree of integration and cooperation, the use of digital technologies (e.g. the internet ), and a rise in flexible work arrangements. [ 10 ] Its drivers include digitalization , globalization , demographic change (ageing, migration ), and cultural change. [ 11 ] The concept also identifies a range of associated challenges. In response to these challenges, the BMAS has developed a "vision for quality jobs in the digital age", based on policies such as moving from unemployment to employment insurance , the promotion of self-determined flexible working time arrangements, improvements in the working conditions of the service sector, new ergonomic approaches to occupational health and safety , high standards in employee data protection , the co-determination and participation of social partners in employment relations, better social protection for self-employed persons, and the beginning of a European dialogue on the future of the welfare state. [ 13 ]

The Fourth Industrial Revolution, 4IR or Industry 4.0 marks a rapid change in how technology influences industry, society, processes and operations in the 21st century. The term has become increasingly common in literature, media and scientific documentation and has continued to evolve since its inception in 2015. Under the Industry 4.0 umbrella is Work 4.0, a term coined in Germany and used within the European Union to describe significant changes to the world of work through 2030. This concept was also introduced in 2015 and has subsequently been adopted by trade unions, discussed by the World Bank, and become part of a global shift in mindset and approach to technology. Both Industry 4.0 and Work 4.0 are powered by transformation and digitalization, and the evolution of technology, with its inherent connectivity and capability, has given rise to other applications and approaches. One such approach is Safety 4.0: a fundamental shift in safety management and technologies, designed to create a standardized framework focused on new practices and processes that reshape safety engagements between workers and machinery.
Safety 4.0 adds a new layer to Industry 4.0 by ensuring that people sit at the center of engagements across plants, machinery and systems. It allows for increases in productivity, worker efficiency and connectivity while allowing for the secure automation of safety protocols, technologies and systems.

The World Development Report 2019 argues that a new social contract is needed to address longer work transitions. [ 14 ] Its authors, Simeon Djankov and Federica Saliola, document examples of countries and companies that have created new ways to deliver social insurance. Work 4.0 has also emerged as a core topic of discussion for the WEF during its annual meetings in Davos . The WEF refers to this phenomenon as the Fourth Industrial Revolution , integrating concepts from Synthetic biology , Artificial intelligence , and Additive Manufacturing . [ 15 ] However, some speculate that this push to automate is less a technological edict than a hidden agenda by corporations to replace laborers with Industrial automation . [ 16 ] [ 17 ]
https://en.wikipedia.org/wiki/Work_4.0
A work-breakdown structure ( WBS ) [ 2 ] in project management and systems engineering is a breakdown of a project into smaller components. It is a key project management element that organizes the team's work into manageable sections. The Project Management Body of Knowledge defines the work-breakdown structure as a "hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives and create the required deliverables." [ 3 ] : 434 [ 4 ] A WBS provides the necessary framework for detailed cost estimation and control while providing guidance for schedule development and control. [ 5 ]

A WBS is a hierarchical and incremental decomposition of the project into deliverables (from major ones such as phases to the smallest ones, sometimes known as work packages). It is a tree structure , which shows a subdivision of effort required to achieve an objective, for example a program , project , or contract . [ 6 ] In a project or contract, the WBS is developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks , subtasks, and work packages) which include all steps necessary to achieve the objective. The work breakdown structure provides a common framework for the natural development of the overall planning and control of a contract and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established. [ 6 ] A work breakdown structure permits the summing of subordinate costs for tasks, materials, etc., into their successively higher level "parent" tasks, materials, etc. For each element of the work breakdown structure, a description of the task to be performed is generated. [ 7 ]

This technique (sometimes called a system breakdown structure [ 8 ] ) is used to define and organize the total scope of a project . The WBS is organized around the primary products of the project (or planned outcomes) instead of the work needed to produce the products (planned actions). Since the planned outcomes are the desired ends of the project, they form a relatively stable set of categories in which the costs of the planned actions needed to achieve them can be collected. A well-designed WBS makes it easy to assign each project activity to one and only one terminal element of the WBS. In addition to its function in cost accounting, the WBS also helps map requirements from one level of system specification to another, for example, a cross-reference matrix mapping functional requirements to high-level or low-level design documents. The WBS may be displayed horizontally in outline form or vertically as a tree structure (like an organization chart). [ 9 ]

The development of the WBS normally occurs at the start of a project and precedes detailed project and task planning. Through progressive elaboration , an iterative process in project management, the detail of the project management plan and the amount of available information increase, [ 10 ] and initial estimates of items such as the project scope description, planning, and budget become more accurate. [ 11 ] This also helps the project team develop a more detailed project plan. [ 12 ] PMI's Practice Standard for Work Breakdown Structures identifies two major types of work breakdown structures.
Deliverable-oriented WBS, also known as a product breakdown structure , uses key deliverables to group the work in the project. Phase-oriented WBS groups the work under key phases or stages of the project lifecycle.

The concept of the work breakdown structure was developed with the Program Evaluation and Review Technique (PERT) by the United States Department of Defense (DoD). PERT was introduced by the U.S. Navy in 1957 to support the development of its Polaris missile program. [ 13 ] While the term "work breakdown structure" was not used, this first implementation of PERT did organize the tasks into product-oriented categories. [ 14 ] By June 1962, DoD, NASA , and the aerospace industry published a document for the PERT/COST system, which described the WBS approach. [ 15 ] This guide was endorsed by the Secretary of Defense for adoption by all services. [ 16 ] In 1968, the DoD issued "Work Breakdown Structures for Defense Materiel Items" (MIL-STD-881), a military standard requiring the use of work breakdown structures across the DoD. [ 17 ] The document has been revised several times. As of May 2023, the most recent revision is F, released 13 May 2022. The version history and current revision of the standard are posted on the Defense Logistics Agency (DLA) ASSIST web site. The standard includes WBS definitions for specific defense materiel commodity systems and addresses WBS elements that are common to all systems. The common elements identified in MIL-STD-881F, Appendix K, are: integration, assembly, test, and checkout; systems engineering; program management; system test and evaluation; data; peculiar support equipment; common support equipment; operational/site activation; contractor logistics support; industrial facilities; and initial spares and repair parts. The standard also includes additional common elements unique to Space Systems, Launch Vehicle Systems, and Strategic Missile Systems.

In 1987, the Project Management Institute (PMI) documented the expansion of these techniques across non-defense organizations. The Project Management Body of Knowledge (PMBOK) Guide provides an overview of the WBS concept, while the "Practice Standard for Work Breakdown Structures" is comparable to the DoD standard but is intended for more general application. [ 18 ]

An important design principle for work breakdown structures is called the 100% rule. [ 19 ] It requires that the WBS include 100% of the work defined by the project scope and capture all deliverables (internal, external, and interim) in terms of the work to be completed, including project management, and that the work at each "child" level sum to 100% of the work of its "parent". A related principle is mutual exclusivity: in addition to the 100% rule, there must be no overlap in scope definition between different elements of a work breakdown structure. Such ambiguity could result in duplicated work or miscommunications about responsibility and authority. Such overlap could also confuse project cost accounting. If the work breakdown structure designer attempts to capture any action-oriented details in the WBS, the designer will likely include either too many actions or too few actions. Too many actions will exceed 100% of the parent's scope, and too few will fall short of 100% of the parent's scope. The best way to adhere to the 100% rule is to define WBS elements in terms of outcomes or results, not actions. This also ensures that the WBS is not overly prescriptive of methods, allowing for greater ingenuity and creative thinking on the part of the project participants. When a project provides professional services, a common technique is to capture all planned deliverables to create a deliverable-oriented WBS. [ 21 ]
Work breakdown structures that subdivide work by project phases (e.g. preliminary design phase, critical design phase) must ensure that phases are clearly separated by a deliverable also used in defining entry and exit criteria (e.g., an approved preliminary or critical design review ). For new product development projects, the most common technique to ensure an outcome-oriented WBS is to use a product breakdown structure (PBS). Feature-driven software projects may use a similar technique, which is to use a feature breakdown structure.

One must decide when to stop dividing work into smaller elements. For most projects, a hierarchy of two to four levels will suffice. This will assist in determining the duration of activities necessary to produce a deliverable defined by the WBS. There are several heuristics or "rules of thumb" used when determining the appropriate duration of an activity or group of activities necessary to produce a specific deliverable defined by the WBS. According to the Project Management Institute , a work package is the "lowest level of the work breakdown structure for which cost and duration are estimated and managed," [ 4 ] that is, a task defined at the activity level that can be realistically estimated and managed.

If the WBS element names are ambiguous, a WBS dictionary can help clarify the distinctions between WBS elements. The WBS dictionary describes each component of the WBS with milestones , deliverables, activities, scope, and sometimes dates, resources , costs, and quality. According to the Project Management Institute , the WBS dictionary is defined as a "document that provides detailed deliverable, activity, and scheduling information about each component in the work breakdown structure." [ 4 ]

It is common for work breakdown structure elements to be numbered sequentially to reveal the hierarchical structure. The purpose of the numbering is to provide a consistent approach to identifying and managing the WBS across like systems regardless of vendor or service. [ 22 ] For example, 1.1.2 Propulsion (in the example below) identifies this item as a Level 3 WBS element, since there are three numbers separated by two decimal points . A coding scheme also helps WBS elements to be recognized in any written context, such as progress tracking, scheduling, or billing, and allows for mapping to the WBS dictionary. [ citation needed ] It is a preferred practice that the statement of work or other contract descriptive documents include the same section terms and hierarchical structure as the WBS. A practical example of the WBS coding scheme is: [ 23 ]

1.0 Aircraft System
1.1 Air Vehicle
1.1.1 Airframe
1.1.2 Propulsion

The lowest element in a tree structure , a terminal element, is one that is not further subdivided. In a work breakdown structure such elements (activity or deliverable ), also known as work packages, are the items that are estimated in terms of resource requirements , budget and duration; linked by dependencies ; and scheduled. At the juncture of the WBS element and organization unit, control accounts and work packages are established, and performance is planned, measured, recorded, and controlled. [ 24 ]

A WBS can be expressed down to any level of interest. Three levels are the minimum recommended, with additional levels for, and only for, items of high cost or high risk, [ 25 ] and two levels of detail in cases such as systems engineering or program management, [ 26 ] with the standard showing examples of WBS of varying depth, such as software development going to 5 levels [ 27 ] or a fire-control system to 7 levels. [ 28 ]
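To make the numbering convention and the cost roll-up concrete, here is a minimal, hypothetical sketch (the helper class, element costs, and tree are illustrative assumptions, not from any standard): each element's code encodes its level, and subordinate costs sum into their "parent" elements as described above.

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    code: str                      # e.g. "1.1.2" -> Level 3 (three numbers, two decimal points)
    name: str
    cost: float = 0.0              # direct cost, meaningful for terminal elements (work packages)
    children: list["WBSElement"] = field(default_factory=list)

    @property
    def level(self) -> int:
        parts = self.code.split(".")
        if parts[-1] == "0":       # conventional top-level label such as "1.0"
            parts = parts[:-1]
        return len(parts)

    def rolled_up_cost(self) -> float:
        """Sum subordinate costs into this parent; terminal elements return their own cost."""
        if not self.children:
            return self.cost
        return sum(child.rolled_up_cost() for child in self.children)

aircraft = WBSElement("1.0", "Aircraft System", children=[
    WBSElement("1.1", "Air Vehicle", children=[
        WBSElement("1.1.1", "Airframe", cost=500.0),
        WBSElement("1.1.2", "Propulsion", cost=300.0),
    ]),
])
print(aircraft.code, aircraft.level, aircraft.rolled_up_cost())  # -> 1.0 1 800.0
```

A real WBS dictionary would attach descriptions, milestones and resources to each element; the tree here only demonstrates the level convention and the roll-up.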
The higher WBS structure should be consistent with whatever norms or template mandates exist within the organization or domain. For example, shipbuilding for the U.S. Navy must respect the fact that the nautical terms and their hierarchical structure put into MIL-STD [ 29 ] are embedded in naval architecture , [ 30 ] and that matching Navy offices and procedures have been built around this naval architecture structure, so any significant change of WBS element numbering or naming in the hierarchy would be unacceptable.

A standard construction example demonstrates the 100% rule and the "progressive elaboration" technique. At WBS Level 1 it shows 100 units of work as the total scope of a project to design and build a custom bicycle. At WBS Level 2, the 100 units are divided into seven elements. The number of units allocated to each element of work can be based on effort or cost; it is not an estimate of task duration. The three largest elements of WBS Level 2 are further subdivided at Level 3. The two largest elements at Level 3 each represent only 17% of the total scope of the project. These larger elements could be further subdivided using the progressive elaboration technique described above.

This is an example of the product-based approach (which might be end-product, deliverable or work based), as compared to a phased approach (which might be gated stages in a formal systems development life cycle ), forced events (e.g. quarterly updates or a fiscal-year rebudgeting), or a skills/roles based approach. WBS design can be supported by software (e.g. a spreadsheet ) to allow automatic rolling up of point values. Estimates of effort or cost can be developed through discussions among project team members. This collaborative technique builds greater insight into scope definitions, underlying assumptions, and consensus regarding the level of granularity required to manage the projects. [ 31 ]
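Following on from the bicycle example, here is a minimal sketch of the automatic roll-up check that such software performs when verifying the 100% rule (the seven Level 2 values below are hypothetical, chosen only so that they sum to 100):

```python
def check_100_percent_rule(parent_units: float, child_units: list[float]) -> bool:
    """True when the children jointly account for exactly 100% of the parent's scope."""
    return abs(sum(child_units) - parent_units) < 1e-9

# Level 1: the whole project is 100 units of work.
# Level 2: seven elements that must together account for all 100 units.
level2_units = [5, 10, 13, 17, 17, 20, 18]        # hypothetical allocation
print(check_100_percent_rule(100, level2_units))  # True -> the decomposition is complete
```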
https://en.wikipedia.org/wiki/Work_breakdown_structure
In solid-state physics , the work function (sometimes spelled workfunction ) is the minimum thermodynamic work (i.e., energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface. Here "immediately" means that the final electron position is far from the surface on the atomic scale, but still too close to the solid to be influenced by ambient electric fields in the vacuum. The work function is not a characteristic of a bulk material, but rather a property of the surface of the material (depending on crystal face and contamination).

The work function W for a given surface is defined by the difference [ 1 ]

W = −eϕ − E_F,

where −e is the charge of an electron , ϕ is the electrostatic potential in the vacuum nearby the surface, and E_F is the Fermi level ( electrochemical potential of electrons) inside the material. The term −eϕ is the energy of an electron at rest in the vacuum nearby the surface. In practice, one directly controls E_F by the voltage applied to the material through electrodes, and the work function is generally a fixed characteristic of the surface material. Consequently, when a voltage is applied to a material, the electrostatic potential ϕ produced in the vacuum will be somewhat lower than the applied voltage, the difference depending on the work function of the material surface. Rearranging the above equation, one has

ϕ = V + W/e,

where V = −E_F/e is the voltage of the material (as measured by a voltmeter , through an attached electrode), relative to an electrical ground that is defined as having zero Fermi level. The fact that ϕ depends on the material surface means that the space between two dissimilar conductors will have a built-in electric field , when those conductors are in total equilibrium with each other (electrically shorted to each other, and with equal temperatures).

The work function refers to removal of an electron to a position that is far enough from the surface (many nm) that the force between the electron and its image charge in the surface can be neglected. [ 1 ] The electron must also be close to the surface compared to the nearest edge of a crystal facet, or to any other change in the surface structure, such as a change in the material composition, surface coating or reconstruction. The built-in electric field that results from these structures, and any other ambient electric field present in the vacuum, are excluded in defining the work function. [ 2 ]

Certain physical phenomena are highly sensitive to the value of the work function. The observed data from these effects can be fitted to simplified theoretical models, allowing one to extract a value of the work function. These phenomenologically extracted work functions may be slightly different from the thermodynamic definition given above. For inhomogeneous surfaces, the work function varies from place to place, and different methods will yield different values of the typical "work function" as they average or select differently among the microscopic work functions. [ 9 ] Many techniques have been developed based on different physical effects to measure the electronic work function of a sample. One may distinguish between two groups of experimental methods for work function measurements: absolute and relative.

The work function is important in the theory of thermionic emission , where thermal fluctuations provide enough energy to "evaporate" electrons out of a hot material (called the 'emitter') into the vacuum.
If these electrons are absorbed by another, cooler material (called the collector ) then a measurable electric current will be observed. Thermionic emission can be used to measure the work function of both the hot emitter and cold collector. Generally, these measurements involve fitting to Richardson's law , and so they must be carried out in a low-temperature and low-current regime where space charge effects are absent.

In order to move from the hot emitter to the vacuum, an electron's energy must exceed the emitter Fermi level by an amount determined simply by the thermionic work function of the emitter. If an electric field is applied towards the surface of the emitter, then all of the escaping electrons will be accelerated away from the emitter and absorbed into whichever material is applying the electric field. According to Richardson's law , the emitted current density (per unit area of emitter), J_e (A/m^2), is related to the absolute temperature T_e of the emitter by the equation

J_e = A_e T_e^2 e^(−W_e / k T_e),

where k is the Boltzmann constant and the proportionality constant A_e is the Richardson constant of the emitter. In this case, the dependence of J_e on T_e can be fitted to yield W_e.

The same setup can be used to instead measure the work function of the collector, simply by adjusting the applied voltage. If an electric field is applied away from the emitter instead, then most of the electrons coming from the emitter will simply be reflected back to the emitter. Only the highest-energy electrons will have enough energy to reach the collector, and the height of the potential barrier in this case depends on the collector's work function, rather than the emitter's. The current is still governed by Richardson's law. However, in this case the barrier height does not depend on W_e. It now depends on the work function of the collector, as well as on any additional applied voltages: [ 11 ] it combines W_c, the collector's thermionic work function, ΔV_ce, the applied collector–emitter voltage, and ΔV_S, the Seebeck voltage in the hot emitter (the influence of ΔV_S is often omitted, as it is a small contribution of order 10 mV). The resulting current density J_c through the collector (per unit of collector area) is again given by Richardson's law , except with this barrier height in place of W_e and with a Richardson-type constant A that depends on the collector material but may also depend on the emitter material and the diode geometry. In this case, the dependence of J_c on T_e, or on ΔV_ce, can be fitted to yield W_c. This retarding potential method is one of the simplest and oldest methods of measuring work functions, and is advantageous since the measured material (collector) is not required to survive high temperatures.

The photoelectric work function is the minimum photon energy required to liberate an electron from a substance, in the photoelectric effect . If the photon's energy is greater than the substance's work function, photoelectric emission occurs and the electron is liberated from the surface. Similar to the thermionic case described above, the liberated electrons can be extracted into a collector and produce a detectable current, if an electric field is applied into the surface of the emitter. Excess photon energy results in a liberated electron with non-zero kinetic energy. It is expected that the minimum photon energy ℏω required to liberate an electron (and generate a current) is

ℏω = W_e,

where W_e is the work function of the emitter.
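Both extraction procedures above lend themselves to a short numerical illustration. The sketch below uses synthetic data and assumed constants (an emitter with W = 4.5 eV, A_e = 1.2e6 A m^-2 K^-2, and a 275 nm photoelectric threshold), so the specific numbers are illustrative only: the thermionic work function comes from the slope of a Richardson plot, ln(J/T^2) versus 1/T, and the photoelectric work function from converting a threshold wavelength to photon energy.

```python
import numpy as np

k_eV = 8.617e-5                      # Boltzmann constant, eV/K

# --- thermionic: synthetic data for an assumed W_e = 4.5 eV emitter ---
T = np.linspace(1500, 2500, 20)      # emitter temperatures, K
A_e = 1.2e6                          # assumed Richardson constant, A/(m^2 K^2)
J = A_e * T**2 * np.exp(-4.5 / (k_eV * T))

# ln(J/T^2) = ln(A_e) - W_e/(k*T), so the slope against 1/T is -W_e/k
slope, intercept = np.polyfit(1.0 / T, np.log(J / T**2), 1)
print(f"thermionic W_e ~ {-slope * k_eV:.2f} eV")        # recovers ~4.50 eV

# --- photoelectric: threshold wavelength to work function ---
hc_eV_nm = 1239.84                   # h*c in eV*nm
lambda_threshold_nm = 275.0          # assumed threshold wavelength
print(f"photoelectric W ~ {hc_eV_nm / lambda_threshold_nm:.2f} eV")  # ~4.51 eV
```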
Photoelectric measurements require a great deal of care, as an incorrectly designed experimental geometry can result in an erroneous measurement of work function. [ 9 ] This may be responsible for the large variation in work function values in the scientific literature. Moreover, the minimum energy can be misleading in materials where there are no actual electron states at the Fermi level that are available for excitation. For example, in a semiconductor the minimum photon energy would actually correspond to the valence band edge rather than the work function. [ 12 ] Of course, the photoelectric effect may be used in the retarding mode, as with the thermionic apparatus described above. In the retarding case, the dark collector's work function is measured instead.

The Kelvin probe technique relies on the detection of an electric field (gradient in ϕ) between a sample material and probe material. The electric field can be varied by the voltage ΔV_sp that is applied to the probe relative to the sample. If the voltage is chosen such that the electric field is eliminated (the flat vacuum condition), then the applied voltage exactly compensates the work function difference between the two materials. Since the experimenter controls and knows ΔV_sp, finding the flat vacuum condition gives the work function difference directly. The only question is, how to detect the flat vacuum condition? Typically, the electric field is detected by varying the distance between the sample and probe. When the distance is changed but ΔV_sp is held constant, a current will flow due to the change in capacitance . This current is proportional to the vacuum electric field, and so when the electric field is neutralized no current will flow. Although the Kelvin probe technique only measures a work function difference, it is possible to obtain an absolute work function by first calibrating the probe against a reference material (with known work function) and then using the same probe to measure a desired sample. [ 10 ] The Kelvin probe technique can be used to obtain work function maps of a surface with extremely high spatial resolution, by using a sharp tip for the probe (see Kelvin probe force microscope ).

The work function depends on the configurations of atoms at the surface of the material. For example, on polycrystalline silver the work function is 4.26 eV, but on silver crystals it varies for different crystal faces: (100) face, 4.64 eV; (110) face, 4.52 eV; (111) face, 4.74 eV. [ 13 ] Ranges for typical surfaces are tabulated in the literature. [ 14 ] Due to the complications described in the modelling section below, it is difficult to theoretically predict the work function with accuracy. However, various trends have been identified. The work function tends to be smaller for metals with an open lattice, [ clarification needed ] and larger for metals in which the atoms are closely packed. It is somewhat higher on dense crystal faces than open crystal faces, also depending on surface reconstructions for the given crystal face.

The work function is not simply dependent on the "internal vacuum level" inside the material (i.e., its average electrostatic potential), because of the formation of an atomic-scale electric double layer at the surface. [ 7 ] This surface electric dipole gives a jump in the electrostatic potential between the material and the vacuum. A variety of factors are responsible for the surface electric dipole. Even with a completely clean surface, the electrons can spread slightly into the vacuum, leaving behind a slightly positively charged layer of material.
This primarily occurs in metals, where the bound electrons do not encounter a hard wall potential at the surface but rather a gradual ramping potential due to image charge attraction. The amount of surface dipole depends on the detailed layout of the atoms at the surface of the material, leading to the variation in work function for different crystal faces.

In a semiconductor , the work function is sensitive to the doping level at the surface of the semiconductor. Since the doping near the surface can also be controlled by electric fields , the work function of a semiconductor is also sensitive to the electric field in the vacuum. The reason for the dependence is that, typically, the vacuum level and the conduction band edge retain a fixed spacing independent of doping. This spacing is called the electron affinity (note that this has a different meaning than the electron affinity of chemistry); in silicon, for example, the electron affinity is 4.05 eV. [ 16 ] If the electron affinity E_EA and the surface's band-referenced Fermi level E_F − E_C are known, then the work function is given by

W = E_EA + E_C − E_F,

where E_C is taken at the surface. From this one might expect that by doping the bulk of the semiconductor, the work function can be tuned. In reality, however, the energies of the bands near the surface are often pinned to the Fermi level, due to the influence of surface states . [ 17 ] If there is a large density of surface states, then the work function of the semiconductor will show a very weak dependence on doping or electric field. [ 18 ]

Theoretical modeling of the work function is difficult, as an accurate model requires a careful treatment of both electronic many-body effects and surface chemistry ; both of these topics are already complex in their own right. One of the earliest successful models for metal work function trends was the jellium model, [ 19 ] which allowed for oscillations in electronic density nearby the abrupt surface (these are similar to Friedel oscillations ) as well as the tail of electron density extending outside the surface. This model showed why the density of conduction electrons (as represented by the Wigner–Seitz radius r_s ) is an important parameter in determining work function. The jellium model is only a partial explanation, as its predictions still show significant deviation from real work functions. More recent models have focused on including more accurate forms of electron exchange and correlation effects, as well as including the crystal face dependence (this requires the inclusion of the actual atomic lattice, something that is neglected in the jellium model). [ 7 ] [ 20 ]

The electron behavior in metals varies with temperature and is largely reflected by the electron work function. A theoretical model for predicting the temperature dependence of the electron work function, developed by Rahemi et al., [ 21 ] explains the underlying mechanism and predicts this temperature dependence for various crystal structures via calculable and measurable parameters. In general, as the temperature increases, the electron work function decreases via

φ(T) = φ_0 − γ (k_B T)^2 / φ_0,

where γ is a calculable material property that depends on the crystal structure (for example, BCC or FCC), φ_0 is the electron work function at T = 0, and k_B is the Boltzmann constant.
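A quick numerical sketch of this temperature dependence follows, with assumed values for φ_0 and γ (chosen only for illustration; in the model γ is a material-specific, calculable property):

```python
k_B = 8.617e-5            # Boltzmann constant, eV/K

def work_function(T, phi0=4.5, gamma=30.0):
    """Electron work function (eV) at temperature T (K); phi0 and gamma are assumed values."""
    return phi0 - gamma * (k_B * T)**2 / phi0

for T in (0, 300, 1000):
    print(T, f"{work_function(T):.4f} eV")   # a small, quadratic-in-T decrease
```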
For a quick reference, tabulated values of the work function of the elements are available in standard references.
https://en.wikipedia.org/wiki/Work_function
Work hardening , also known as strain hardening , is the process by which a material's load-bearing capacity (strength) increases during plastic (permanent) deformation. This characteristic is what sets ductile materials apart from brittle materials. [ 1 ] Work hardening may be desirable, undesirable, or inconsequential, depending on the application. The strengthening occurs because of dislocation movements and dislocation generation within the crystal structure of the material. [ 2 ] Many non-brittle metals with a reasonably high melting point, as well as several polymers, can be strengthened in this fashion. [ 3 ] Alloys not amenable to heat treatment , including low-carbon steel, are often work-hardened. Some materials, such as indium , cannot be work-hardened at low temperatures; [ 4 ] others, such as pure copper and aluminum, can be strengthened only via work hardening. [ 5 ]

An example of undesirable work hardening occurs during machining , when early passes of a cutter inadvertently work-harden the workpiece surface, causing damage to the cutter during the later passes. Certain alloys are more prone to this than others; superalloys such as Inconel require machining strategies that take work hardening into account. For metal objects designed to flex, such as springs , specialized alloys are usually employed in order to avoid work hardening (a result of plastic deformation ) and metal fatigue , with specific heat treatments required to obtain the necessary characteristics.

An example of desirable work hardening is that which occurs in metalworking processes that intentionally induce plastic deformation to effect a shape change. These processes are known as cold working or cold forming processes. They are characterized by shaping the workpiece at a temperature below its recrystallization temperature, usually at ambient temperature . [ 6 ] Cold forming techniques are usually classified into four major groups: squeezing , bending , drawing , and shearing . Applications include the heading of bolts and cap screws and the finishing of cold-rolled steel . In cold forming, metal is formed at high speed and high pressure using tool steel or carbide dies. The cold working of the metal increases the hardness, yield strength , and tensile strength. [ 7 ]

Before work hardening, the lattice of the material exhibits a regular, nearly defect-free pattern (almost no dislocations). The defect-free lattice can be created or restored at any time by annealing . As the material is work hardened it becomes increasingly saturated with new dislocations, and more dislocations are prevented from nucleating (a resistance to dislocation formation develops). This resistance to dislocation formation manifests itself as a resistance to plastic deformation; hence, the observed strengthening. In metallic crystals, plastic deformation is usually carried out on a microscopic scale by defects called dislocations, which are created by fluctuations in local stress fields within the material, culminating in a lattice rearrangement as the dislocations propagate through the lattice. At normal temperatures the dislocations are not annihilated; instead, they accumulate, interact with one another, and serve as pinning points or obstacles that significantly impede their motion. This leads to an increase in the yield strength of the material and a subsequent decrease in ductility.
Such deformation increases the concentration of dislocations, which may subsequently form low-angle grain boundaries surrounding sub-grains. Cold working generally results in a higher yield strength, as a result of the increased number of dislocations and the Hall–Petch effect of the sub-grains, and a decrease in ductility. The effects of cold working may be reversed by annealing the material at high temperatures, where recovery and recrystallization reduce the dislocation density. A material's work hardenability can be predicted by analyzing a stress–strain curve , or studied in context by performing hardness tests before and after a process. [ 8 ] [ 9 ]

Work hardening is a consequence of plastic deformation, a permanent change in shape. This is distinct from elastic deformation, which is reversible. Most materials do not exhibit only one or the other, but rather a combination of the two. The following discussion mostly applies to metals, especially steels, which are well studied. Work hardening occurs most notably for ductile materials such as metals. Ductility is the ability of a material to undergo plastic deformations before fracture (for example, bending a steel rod until it finally breaks). The tensile test is widely used to study deformation mechanisms. This is because under compression most materials will experience trivial (lattice mismatch) and non-trivial (buckling) events before plastic deformation or fracture occur; the intermediate processes that occur in the material under uniaxial compression before the incidence of plastic deformation make the compressive test fraught with difficulties.

A material generally deforms elastically under the influence of small forces ; the material returns quickly to its original shape when the deforming force is removed. This phenomenon is called elastic deformation . This behavior in materials is described by Hooke's law . Materials behave elastically until the deforming force increases beyond the elastic limit , which is also known as the yield stress. At that point, the material is permanently deformed and fails to return to its original shape when the force is removed. This phenomenon is called plastic deformation . For example, if one stretches a coil spring up to a certain point, it will return to its original shape, but once it is stretched beyond the elastic limit, it will remain deformed and will not return to its original state. Elastic deformation stretches the bonds between atoms away from their equilibrium radius of separation, without applying enough energy to break the inter-atomic bonds. Plastic deformation, on the other hand, breaks inter-atomic bonds, and therefore involves the rearrangement of atoms in a solid material.

In materials science parlance, dislocations are defined as line defects in a material's crystal structure. The bonds surrounding the dislocation are already elastically strained by the defect compared to the bonds between the constituents of the regular crystal lattice. Therefore, these bonds break at relatively lower stresses, leading to plastic deformation. The strained bonds around a dislocation are characterized by lattice strain fields. For example, there are compressively strained bonds directly next to an edge dislocation and tensilely strained bonds beyond the end of an edge dislocation. These form compressive strain fields and tensile strain fields, respectively. Strain fields are analogous to electric fields in certain ways.
Specifically, the strain fields of dislocations obey similar laws of attraction and repulsion; in order to reduce overall strain, compressive strains are attracted to tensile strains, and vice versa. The visible ( macroscopic ) results of plastic deformation are the result of microscopic dislocation motion. For example, the stretching of a steel rod in a tensile tester is accommodated through dislocation motion on the atomic scale. The increase in the number of dislocations is a quantification of work hardening. Plastic deformation occurs as a consequence of work being done on a material; energy is added to the material. In addition, the energy is almost always applied fast enough and in large enough magnitude to not only move existing dislocations, but also to produce a great number of new dislocations by jarring or working the material sufficiently. New dislocations are generated in proximity to a Frank–Read source .

Yield strength is increased in a cold-worked material. Using lattice strain fields, it can be shown that an environment filled with dislocations will hinder the movement of any one dislocation. Because dislocation motion is hindered, plastic deformation cannot occur at normal stresses . Upon application of stresses just beyond the yield strength of the non-cold-worked material, a cold-worked material will continue to deform using the only mechanism available, elastic deformation: the regular scheme of stretching or compressing of electrical bonds (without dislocation motion ) continues to occur, and the modulus of elasticity is unchanged. Eventually the stress is great enough to overcome the strain-field interactions and plastic deformation resumes.

However, the ductility of a work-hardened material is decreased. Ductility is the extent to which a material can undergo plastic deformation, that is, how far a material can be plastically deformed before fracture. A cold-worked material is, in effect, a normal (brittle) material that has already been extended through part of its allowed plastic deformation. If dislocation motion and plastic deformation have been hindered enough by dislocation accumulation, and stretching of electronic bonds and elastic deformation have reached their limit, a third mode of deformation occurs: fracture.

The shear strength, τ, of a material depends on the shear modulus, G, the magnitude of the Burgers vector , b, and the dislocation density, ρ:

τ = τ_0 + α G b √ρ,

where τ_0 is the intrinsic strength of the material with low dislocation density and α is a correction factor specific to the material. As the equation shows, work hardening has a square-root dependency on the number of dislocations. The material exhibits high strength if there are either high levels of dislocations (greater than 10^14 dislocations per m^2) or no dislocations. A moderate number of dislocations (between 10^7 and 10^9 dislocations per m^2) typically results in low strength.

For an extreme example, in a tensile test a bar of steel is strained to just before the length at which it usually fractures. The load is released smoothly and the material relieves some of its strain by decreasing in length. The decrease in length is called the elastic recovery, and the result is a work-hardened steel bar. The fraction of length recovered (length recovered/original length) is equal to the yield stress divided by the modulus of elasticity.
(Here we discuss true stress in order to account for the drastic decrease in diameter in this tensile test.) The length recovered after removing a load from a material just before it breaks is equal to the length recovered after removing a load just before it enters plastic deformation. The work-hardened steel bar has a large enough number of dislocations that the strain field interaction prevents all plastic deformation. Subsequent deformation requires a stress that varies linearly with the strain observed; the slope of the graph of stress vs. strain is the modulus of elasticity, as usual. The work-hardened steel bar fractures when the applied stress exceeds the usual fracture stress and the strain exceeds the usual fracture strain. This may be considered to be the elastic limit, and the yield stress is now equal to the fracture toughness , which is much higher than the yield stress of non-work-hardened steel. The amount of plastic deformation possible is zero, which is less than the amount of plastic deformation possible for a non-work-hardened material. Thus, the ductility of the cold-worked bar is reduced. Substantial and prolonged cavitation can also produce strain hardening.

There are two common mathematical descriptions of the work hardening phenomenon. Hollomon's equation is a power-law relationship between the stress and the amount of plastic strain: [ 10 ]

σ = K ε_p^n,

where σ is the stress, K is the strength index or strength coefficient, ε_p is the plastic strain and n is the strain hardening exponent . Ludwik's equation is similar but includes the yield stress: [ 11 ]

σ = σ_y + K ε_p^n.

If a material has been subjected to prior deformation (at low temperature), then the yield stress will be increased by a factor depending on the amount of prior plastic strain ε_0:

σ = σ_y + K (ε_0 + ε_p)^n.

The constant K is structure dependent and is influenced by processing, while n is a material property normally lying in the range 0.2–0.5. The strain hardening index can be described by

n = d(log σ)/d(log ε) = (ε/σ) (dσ/dε).

This equation can be evaluated from the slope of a log(σ) – log(ε) plot. Rearranging allows a determination of the rate of strain hardening at a given stress and strain:

dσ/dε = n σ/ε.

Steel is an important engineering material, used in many applications. Steel may be work hardened by deformation at low temperature, called cold working . Typically, an increase in cold work results in a decrease in the strain hardening exponent [ citation needed ] . Similarly, high-strength steels tend to exhibit a lower strain hardening exponent [ citation needed ] .

Copper was the first metal in common use for tools and containers since it is one of the few metals available in non-oxidized form, not requiring the smelting of an ore . Copper is easily softened by heating and then cooling (it does not harden by quenching, e.g., quenching in cool water). In this annealed state it may then be hammered, stretched and otherwise formed, progressing toward the desired final shape but becoming harder and less ductile as work progresses. If work continues beyond a certain hardness the metal will tend to fracture when worked, and so it may be re-annealed periodically as shaping continues. Annealing is stopped when the workpiece is near its final desired shape, so that the final product will have the desired strength and hardness. The technique of repoussé exploits these properties of copper, enabling the construction of durable jewelry articles and sculptures (such as the Statue of Liberty ).
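The log–log evaluation just described is easy to demonstrate numerically. In this minimal sketch the stress–strain points are synthetic, generated from assumed values K = 700 MPa and n = 0.25, and the fit recovers them:

```python
import numpy as np

eps_p = np.linspace(0.01, 0.2, 15)          # plastic strain
sigma = 700.0 * eps_p**0.25                 # true stress, MPa (synthetic Hollomon data)

# n is the slope of log(sigma) vs log(eps_p); the intercept gives log(K)
n, logK = np.polyfit(np.log(eps_p), np.log(sigma), 1)
K = np.exp(logK)
print(f"n ~ {n:.3f}, K ~ {K:.0f} MPa")      # recovers ~0.250 and ~700 MPa

# Rate of strain hardening at a given point: d(sigma)/d(eps) = n * sigma / eps
i = 7
print(f"hardening rate ~ {n * sigma[i] / eps_p[i]:.0f} MPa per unit strain")
```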
Much gold jewelry is produced by casting, with little or no cold working, which, depending on the alloy grade, may leave the metal relatively soft and bendable. However, a jeweler may intentionally use work hardening to strengthen wearable objects that are exposed to stress, such as rings . Items made from aluminum and its alloys must be carefully designed to minimize or evenly distribute flexure, which can lead to work hardening and, in turn, stress cracking, possibly causing catastrophic failure . For this reason modern aluminum aircraft have an imposed working lifetime (dependent upon the type of loads encountered), after which the aircraft must be retired.
https://en.wikipedia.org/wiki/Work_hardening
The work loop technique is used in muscle physiology to evaluate the mechanical work and power output of skeletal or cardiac muscle contractions via in vitro muscle testing of whole muscles, fiber bundles or single muscle fibers. The technique is primarily used for cyclical contractions such as those of cockroach walking, [ 1 ] the rhythmic flapping of bird wings, [ 2 ] or the beating of heart ventricular muscle. [ 3 ] To simulate the rhythmic shortening and lengthening of a muscle (e.g. while moving a limb) and natural kinematics, a servo motor oscillates the muscle at a given frequency and range of motion observed in natural behavior. Simultaneously, a burst of electrical pulses is applied to the muscle at the beginning of each shortening-lengthening cycle to stimulate the muscle to produce force. Since force and length return to their initial values at the end of each cycle, a plot of force vs. length yields a 'work loop'. Intuitively, the area enclosed by the loop represents the net mechanical work performed by the muscle during a single cycle.

Classical studies from the 1920s through the 1960s characterized the fundamental properties of muscle activation (via action potentials from motor neurons ), force development , length change, shortening velocity, and history dependence. [ 4 ] However, each of these parameters was measured while holding the others constant, making their interactions unclear. For instance, force-velocity and force-length relationships were determined at constant velocities and loads. Yet during locomotion, neither muscle velocity nor muscle force is constant. In running, for example, muscles in each leg experience time-varying forces and time-varying shortening velocities as the leg decelerates and accelerates from heel-strike to toe-off . In such cases, classical force-length (constant velocity) or force-velocity (constant length) experiments are not sufficient to fully explain muscle function. [ 5 ]

In 1960, the work loop method was introduced to explore muscle contractions of both variable speed and variable force. These early work loop experiments characterized the mechanical behavior of asynchronous muscle (a type of insect flight muscle ). [ 6 ] However, due to the specialized nature of asynchronous muscle, the work loop method was applicable only to insect muscle experiments. In 1985, Robert K. Josephson modernized the technique to evaluate properties of the synchronous muscles powering katydid flight [ 7 ] by stimulating the muscle at regular time intervals during each shortening-lengthening cycle. Josephson's innovation generalized the work loop technique for wide use among both invertebrate and vertebrate muscle types, profoundly advancing the fields of muscle physiology and comparative biomechanics .

Work loop experiments also allowed a greater appreciation of the role of activation and relaxation kinetics in muscle power and work output. For instance, if a muscle turns on and off more slowly, the shortening and lengthening curves will be shallower and closer together, resulting in decreased work output. "Negative" work loops were also discovered, showing that a muscle lengthening at higher force than it produces while shortening absorbs net energy, as in the case of deceleration or constant-speed downhill walking. In 1992, the work loop approach was extended further by the novel use of bone strain measurements to obtain in vivo force. Combined either with estimates of muscle length changes or with direct methods (e.g.
sonomicrometry ), in vivo force technology enabled the first in vivo work loop measurements. [ 8 ]

A work loop combines two separate plots: force vs. time and length vs. time. When force is plotted against length, a work loop plot is created: each point along the loop corresponds to a force and a length value at a unique point in time. As time progresses, the plotted points trace the shape of the work loop. The direction in which the work loop is traced through time is a critical feature of the work loop. As the muscle shortens while generating a tensile force (i.e. "pulling"), by convention the muscle is said to be performing positive work ("acting as a motor") during that phase. As the muscle lengthens (while still generating a tensile force), the muscle is performing negative work (or, alternatively, positive work is being performed on the muscle). Thus, a muscle generating force while shortening is said to output 'positive work' (i.e. generating work), whereas a muscle generating force while lengthening produces 'negative work' (i.e. absorbing work). Over an entire cycle, there is typically some positive and some negative work; an overall counter-clockwise work loop represents net work generation, while an overall clockwise work loop represents net work absorption. [ 9 ] For example, during a jump, the leg muscles generate work to increase the body's speed away from the ground, yielding counter-clockwise work loops. When landing, however, the same muscles absorb work to decrease the body's speed, yielding clockwise work loops. Furthermore, a muscle can produce positive work followed by negative work (or vice versa) within a shortening-lengthening cycle, causing a 'figure 8' work loop shape containing both clockwise and counter-clockwise segments. [ 10 ]

Since work is defined as force multiplied by displacement, the area of the graph shows the mechanical work output of the muscle. In a typical work-generating instance, the muscle shows a rapid curvilinear rise in force as it shortens, followed by a slower decline during or shortly before the muscle begins the lengthening phase of the cycle. The area beneath the shortening curve (upper curve) gives the total work done by the shortening muscle, while the area beneath the lengthening curve (lower curve) represents the work absorbed by the muscle and turned into heat (done by either environmental forces or antagonistic muscles). Subtracting the latter from the former gives the net mechanical work output of the muscle cycle, and dividing that by the cycle duration gives the net mechanical power output. [ 9 ]

Hypothetically, a square work loop (area = maximum force × maximum displacement) would represent the maximum work output of a muscle operating within a given force and length range. [ 9 ] [ 11 ] Conversely, a flat line (area = 0) would represent the minimum work output. For example, a muscle that generates force without changing length ( isometric contraction ) will show a vertical-line 'work loop'. Reciprocally, a muscle that shortens without changing force ( isotonic contraction ) will show a horizontal-line 'work loop'. Finally, a muscle can behave like a spring which extends linearly as a force is applied; this case would yield a slanted straight-line 'work loop' whose slope is the spring stiffness . [ 12 ]

Work loop experiments are most often performed on muscle tissue isolated either from invertebrates (e.g. insects [ 7 ] and crustaceans [ 13 ] ) or small vertebrates (e.g. fish , [ 14 ] frogs , [ 10 ] rodents [ 15 ] ).
The experimental technique described below applies both to in vitro and in situ approaches. Following humane procedures approved by IACUC , the muscle is isolated from the animal (or prepared in situ), attached to the muscle testing apparatus and bathed in oxygenated Ringer's solution or Krebs-Henseleit solution maintained at a constant temperature. While the isolated muscle is still living, the experimenter then applies two manipulations to test muscle function: 1) electrical stimulation to mimic the action of a motor neuron , and 2) strain (muscle length change) to mimic the rhythmic motion of a limb. To elicit muscle contraction, the muscle is stimulated by a series of electrical pulses delivered by an electrode to stimulate either the motor nerve or the muscle tissue itself. Simultaneously, a computer-controlled servo motor in the testing apparatus oscillates the muscle while measuring the force generated by the stimulated muscle. A number of stimulation and length-change parameters are modulated by the experimenter to influence muscle force, work and power output.

Calculation of either muscle work or power requires collection of muscle force and length (or velocity) data at a known sampling rate. Net work is typically calculated either from instantaneous power (muscle force × muscle velocity) or from the area enclosed by the work loop on a force vs. length plot. Both methods are mathematically equivalent and highly accurate; however, the 'area inside the loop' method (despite its simplicity) can be tedious to carry out for large data sets.

Step 1) Obtain muscle velocity by numerical differentiation of muscle length data.
Step 2) Obtain instantaneous muscle power by multiplying muscle force data by muscle velocity data for each time sample.
Step 3) Obtain net work (a single number) by numerical integration of muscle power data.
Step 4) Obtain net power (a single number) by dividing net work by the time duration of the cycle.

The area inside the work loop can be quantified either 1) digitally, by importing a work loop image into ImageJ , tracing the work loop shape and quantifying its area, or 2) manually, by printing a hard copy of the work loop graph, cutting out the inner area and weighing it on an analytical balance . Net work is then divided by the time duration of the cycle to obtain net power.

A significant advantage of the work loop technique over assessments of skeletal muscle power in humans is that, in humans, confounding factors associated with whole-body muscle function mask true power production at the skeletal muscle level. The most notable confounding factors include the influence of the central nervous system limiting the ability to generate maximal power output, examination of whole muscle groups rather than individual skeletal muscles, bodily inertia , and motivational aspects associated with sustained muscle activity. Additionally, measures of muscle fatigue are not affected by fatigue of the central nervous system. By adopting the work loop technique in an isolated muscle model, these confounding factors are eliminated, thus allowing for a closer examination of the muscle-specific changes in work loop power output in response to a stimulus.
Moreover, usage of the work loop technique as opposed to other modes of contraction, such as isometric , isotonic and isovelocity , allows for a better representation of the changes in the mechanical work of a skeletal muscle in response to an independent variable, such as the direct application of caffeine , [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] sodium bicarbonate , [ 22 ] and taurine [ 23 ] to an isolated skeletal muscle, and the changes in work loop power output and fatigue resistance during ageing [ 24 ] [ 25 ] [ 26 ] and in response to an obesogenic diet. [ 27 ] [ 28 ]

Development of the work loop technique has revealed various functional roles for muscle by simulating more realistic kinematics and activation. As a "motor", the muscle does work on the environment, resulting in a counter-clockwise, positive work loop. During positive work, the length of the muscle increases and force rises to a peak; the muscle then shortens as force decreases. [ 29 ] One example of positive work done on the environment is eel swimming. [ 30 ] Like other undulatory swimmers, eels move by generating thrust in the water. Eels undulate their body axis using muscles along their trunks, generating mechanical power.

As a "brake", the muscle absorbs energy from the environment. [ 1 ] This clockwise loop results in a muscle that absorbs work, doing negative work on the environment. These muscles shorten while decreasing their force output; after the muscle has absorbed the energy from the environment, its length returns to normal with increased force. Despite the power-producing ability of one of the cockroach's leg extensors, activation of the muscle during the large strains imposed by naturalistic running worked to slow the swing of the leg. [ 1 ]

As a "spring", the muscle is able to mediate transitions between states of motion while producing negligible work. [ 31 ] As springs, these muscles shorten when force is applied and return to their resting length when the force is released. For example, the large basalar (b1) muscle of the blowfly acts as a passive elastic element: it generates little power and instead stores energy that is used to direct force. [ 32 ] These muscles are likely used to rapidly modulate the mechanics of locomotion rather than acting to produce power.

As a "strut", the muscle generates force isometrically, or evenly while shortening, while allowing the passive elasticity of the tendons to store and release energy. In turkeys running on level surfaces, the lateral gastrocnemius muscle contracts isometrically: while this muscle does not change length, it produces high forces and allows the stretch and recoil of the tendon to supply the mechanical work. [ 33 ]

Originally, work loops imposed a sinusoidal length change on the muscle, with equal time spent lengthening and shortening. However, in vivo muscle length change often has more than half the cycle spent shortening and less than half lengthening. Imposing these "asymmetrical" stretch-shorten cycles can result in higher work and power outputs, as shown in treefrog calling muscles. [ 34 ]
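The net-work calculation in Steps 1–4 above, together with the loop-area cross-check, can be sketched in a few lines. All signals below are synthetic (a sinusoidal length change with force peaking during shortening, so this "muscle" acts as a motor), and the sign convention treats shortening against tension as positive work:

```python
import numpy as np

f_cycle = 5.0                               # cycle frequency, Hz (assumed)
t = np.linspace(0.0, 1.0 / f_cycle, 2001)   # one full shortening-lengthening cycle

L = 0.010 + 0.001 * np.sin(2 * np.pi * f_cycle * t)            # muscle length, m
F = 2.0 + 1.5 * np.sin(2 * np.pi * f_cycle * t - np.pi / 2)    # force, N, peaks during shortening

v = np.gradient(L, t)        # Step 1: velocity by numerical differentiation
P = F * (-v)                 # Step 2: instantaneous power; shortening (v < 0) counts as positive
W_net = np.trapz(P, t)       # Step 3: net work over one cycle, J
P_net = W_net * f_cycle      # Step 4: net power = net work / cycle duration, W

# Cross-check: signed area enclosed by the (length, force) loop via the shoelace
# formula; counter-clockwise (work-generating) loops come out positive.
area = 0.5 * np.sum(L * np.roll(F, -1) - np.roll(L, -1) * F)
print(f"net work {W_net*1e3:.2f} mJ, net power {P_net*1e3:.1f} mW, loop area {area*1e3:.2f} mJ")
```

For this synthetic loop both routes give the same answer (about 4.7 mJ per cycle), illustrating the mathematical equivalence noted above.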
https://en.wikipedia.org/wiki/Work_loop
Work measurement is the application of techniques designed to establish the time for an average worker to carry out a specified manufacturing task at a defined level of performance. [ 1 ] It is concerned with the duration of time it takes to complete a work task assigned to a specific job: the time taken to complete one unit of work or operation, carried out in full under specified conditions. Work measurement helps to uncover non-standardization existing in the workplace, as well as non-value-adding activities and waste. Work has to be measured for a number of reasons.

Work measurement is a technique for establishing a Standard Time, which is the time required to perform a given task, based on time measurements of the work content of the prescribed method, with due consideration for fatigue and for personal and unavoidable delays. Method study is the principal technique for reducing the work involved, primarily by eliminating unnecessary movement on the part of material or operatives and by substituting good methods for poor ones. Work measurement is concerned with investigating, reducing and subsequently eliminating ineffective time, that is, time during which no effective work is being performed, whatever the cause.

Work measurement, as the name suggests, provides management with a means of measuring the time taken in the performance of an operation or series of operations in such a way that ineffective time is shown up and can be separated from effective time. In this way its existence, nature and extent become known where previously they were concealed within the total. Revealing existing causes of ineffective time through study, important though it is, is perhaps less important in the long term than the setting of sound time standards, since these will continue to apply as long as the work to which they refer continues to be done. They will also show up any ineffective time or additional work which may occur once they have been established.

In the process of setting standards it may be necessary to use work measurement:
To compare the efficiency of alternative methods; other conditions being equal, the method which takes the least time will be the best method.
To balance the work of members of teams, in association with multiple activity charts, so that, as nearly as possible, each member has a task taking an equal time to perform.
To determine, in association with man and machine multiple activity charts, the number of machines an operative can run.

The time standards, once set, may then be used:
To provide information on which the planning and scheduling of production can be based, including the plant and labour requirements for carrying out the programme of work and the utilisation of available capacity.
To provide information on which estimates for tenders, selling prices and delivery promises can be based.
To set standards of machine utilisation and labour performance which can be used for any of the above purposes and as a basis for incentive schemes.
To provide information for labour-cost control and to enable standard costs to be fixed and maintained.

It is thus clear that work measurement provides the basic information necessary for all the activities of organising and controlling the work of an enterprise in which the time element plays a part. Its uses in connection with these activities will be more clearly seen once we have shown how the standard time is obtained.
The following are the principal techniques by which work measurement is carried out. Of these techniques we shall concern ourselves primarily with time study, since it is the basic technique of work measurement; some of the other techniques either derive from it or are variants of it.

Time Study consists of recording times and rates of work for elements of a specified job carried out under specified conditions, to obtain the time necessary to carry out the job at a defined level of performance. In this technique the job to be studied is timed with a stopwatch, rated, and the Basic Time calculated. The requirements for effective time study are:
a. Co-operation and goodwill
b. Defined job
c. Defined method
d. Correct normal equipment
e. Quality standard and checks
f. Experienced qualified motivated worker
g. Method of timing
h. Method of assessing relative performance
i. Elemental breakdown
j. Definition of break points
k. Recording media

One of the most critical requirements for time study is that of elemental breakdown. There are some general rules concerning the way in which a job should be broken down into elements. Elements should be easily identifiable, with definite beginnings and endings so that, once established, they can be repeatedly recognised; these points are known as the break points and should be clearly described on the study sheet. Elements should be as short as can be conveniently timed by the observer. As far as possible, elements – particularly manual ones – should be chosen so that they represent naturally unified and distinct segments of the operation.

Time Study is based on a record of observed times for doing a job together with an assessment by the observer of the speed and effectiveness of the worker in relation to the observer's concept of Standard Rating. This assessment is known as rating, defined in BS 3138 (1979) as: "The numerical value or symbol used to denote a rate of working." Standard rating is also defined (in the same British Standard, BS 3138) as: "The rating corresponding to the average rate at which qualified workers will naturally work, provided that they adhere to the specified method and that they are motivated to apply themselves to their work. If the standard rating is consistently maintained and the appropriate relaxation is taken, a qualified worker will achieve standard performance over the working day or shift."

Industrial engineers use a variety of rating scales; one which has achieved wide use is the British Standards Rating Scale, on which 0 corresponds to no activity and 100 corresponds to standard rating. A rating is expressed as 'X' BS. An illustration of the standard scale, with walking pace as the example activity:
Rating – walking pace
0 – no activity
50 – very slow
75 – steady
100 – brisk (standard rating)
125 – very fast
150 – exceptionally fast

The basic time for a task, or element, is the time for carrying out an element of work or an operation at standard rating:

basic time = observed time × (observed rating / standard rating)

The result is expressed in basic minutes (BMs). The work content of a job or operation is defined as: basic time + relaxation allowance + any allowance for additional work – e.g. that part of contingency allowance which represents work. Standard time is the total time in which a job should be completed at standard performance, i.e. work content plus contingency allowance for delay, unoccupied time and interference allowance, where applicable.
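A minimal sketch of the basic-time and standard-time arithmetic above; the function names and the 12% relaxation and 2% contingency figures are illustrative assumptions, not values prescribed by BS 3138.

```python
def basic_time(observed_time_min, observed_rating, standard_rating=100):
    """Basic minutes: observed time extended by the observed rating."""
    return observed_time_min * observed_rating / standard_rating

def standard_time(basic_min, relaxation_pct, contingency_pct=0.0):
    """Standard minutes: basic time plus percentage allowances."""
    return basic_min * (1 + relaxation_pct / 100 + contingency_pct / 100)

# An element observed at 0.50 min while the worker was rated at 110 BS:
bm = basic_time(0.50, 110)                     # 0.55 basic minutes
sm = standard_time(bm, relaxation_pct=12, contingency_pct=2)
print(f"{bm:.3f} BM -> {sm:.3f} standard minutes")
```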
Allowances for unoccupied time and for interference may be important in the measurement of machine-controlled operations, but they do not appear in every computation of standard time. Relaxation allowance, on the other hand, has to be taken into account in every computation, whether the job is a simple manual one or a very complex operation requiring the simultaneous control of several machines. A contingency allowance will probably figure quite frequently in the compilation of standard times; it is therefore convenient to consider the contingency allowance and relaxation allowance together, so that the sequence of calculation which started with the completion of observations at the workplace may be taken right through to the compilation of standard time.

Contingency allowance: a small allowance of time which may be included in a standard time to meet legitimate and expected items of work or delays, the precise measurement of which is uneconomical because of their infrequent or irregular occurrence.

Relaxation allowance: an addition to the basic time to provide the worker with the opportunity to recover from the physiological and psychological effects of carrying out specified work under specified conditions and to allow attention to personal needs. The amount of the allowance will depend on the nature of the job. Examples are:
Personal 5–7%
Energy output 0–10%
Noisy 0–5%
Conditions 0–100% (e.g. electronics 5%)

Other allowances include the process allowance, which covers the case where an operator is prevented from continuing with their work, although ready and waiting, by the process or machine requiring further time to complete its part of the job. A final allowance is that of interference, which is included whenever an operator has charge of more than one machine and the machines are subject to random stoppage. In normal circumstances the operator can only attend to one machine, and the others must wait for attention; those machines are then subject to interference, which increases the machine cycle time. It is now possible to obtain a complete picture of the standard time for a straightforward manual operation.

Activity sampling is a technique in which a large number of instantaneous observations are made over a period of time of a group of machines, processes or workers. Each observation records what is happening at that instant, and the percentage of observations recorded for a particular activity or delay is a measure of the percentage of time during which the activity or delay occurs. The advantages of this method are that:
it is capable of measuring many activities that are impractical or too costly to be measured by time study;
one observer can collect data concerning the simultaneous activities of a group;
activity sampling can be interrupted at any time without effect.
The disadvantages are that:
it is quicker and cheaper to use time study on jobs of short duration;
it does not provide elemental detail.
The information provided by an activity sampling study is the proportion of time devoted to each defined activity or delay. To determine the number of observations required in a full study, the following equation is used:

n = pq / σ_P²

where σ_P is the acceptable standard error of the proportion, p is the percentage of time occupied by the activity or delay of interest, and q = 100 − p.

A predetermined motion time system is a work measurement technique whereby times established for basic human motions (classified according to the nature of the motion and the conditions under which it is made) are used to build up the time for a job at a defined level of performance.
The systems are based on the assumption that all manual tasks can be analysed into basic motions of the body or body members. They were compiled as a result of a very large number of studies of each movement, generally by a frame-by-frame analysis of films of a wide range of subjects, men and women, performing a wide variety of tasks.

The first generation of PMT systems, MTM1, was very finely detailed, involving much analysis and producing extremely accurate results. This attention to detail was both a strength and a weakness: for many potential applications the quantity of detailed analysis was not necessary and was prohibitively time-consuming. In these cases "second generation" techniques, such as Simplified PMTS, Master Standard Data, Primary Standard Data and MTM2, could be used with advantage and no great loss of accuracy. For even speedier application, where some detail could be sacrificed, a "third generation" technique such as Basic Work Data or MTM3 could be used.

Synthesis is a work measurement technique for building up the time for a job at a defined level of performance by totalling element times obtained previously from time studies on other jobs containing the elements concerned, or from synthetic data. Synthetic data is the name given to tables and formulae derived from the analysis of accumulated work measurement data, arranged in a form suitable for building up standard times, machine process times, etc. by synthesis. Synthetic times are increasingly being used as a substitute for individual time studies in the case of jobs made up of elements which have recurred a sufficient number of times in jobs previously studied to make it possible to compile accurate representative times for them.

The technique of estimating is the least refined of all those available to the work measurement practitioner. It consists of an estimate of total job duration (or, in common practice, the job price or cost). This estimate is made by a craftsman or person familiar with the craft. It normally embraces the total components of the job, including work content, preparation and disposal time, any contingencies, etc., all estimated in one gross amount.

Analytical estimating introduces work measurement concepts into estimating. In analytical estimating the estimator is trained in elemental breakdown and in the concept of standard performance. The estimate is prepared by first breaking the work content of the job into elements, and then utilising the experience of the estimator (normally a craftsman) to estimate the time for each element of work – at standard performance. These estimated basic minutes are totalled to give a total job time in basic minutes. An allowance for relaxation and any necessary contingency is then made, as in conventional time study, to give the standard time.

Comparative estimating has been developed to permit speedy and reliable assessment of the duration of variable and infrequent jobs by estimating them within chosen time bands. Limits are set within which the job under consideration will fall, rather than in terms of precise standard or allowed minute values. It is applied by comparing the job to be estimated with jobs of similar work content, using these similar jobs as "benchmarks" to locate the new job in its relevant time band – known as its Work Group.

The work measurement concept evolved in the manufacturing world and has not yet been fully adapted to the global shift toward the service sector.
Certain factors create inherent difficulties in determining standard times for labor allocation in service jobs: (a) wide variation in time between arrivals and in service performance time; (b) the difficulty of assessing the damage done to the organization by long customer waiting times for service. These difficulties make it hard to calculate the break-even point at which raising worker output – which minimizes labor costs – begins to increase customer waiting times and reduce service quality. Dr. Isaac Balayla and Professor Yissachar Gilad of the Technion, Israel, developed the Balayla (Balaila) Model, which overcomes most of the above-mentioned difficulties by taking a multi-domain approach: 1) the model deploys a series of indicators for the correlation between output and waiting times, with indicator values affected by the urgency of the service; 2) the model determines the best break-even point by comparing the operational cost of an additional worker with the economic benefit of the resulting decrease in waiting times. Thus the model finds the best balance between worker output and service quality.

Balayla, I. (2012). "A manpower allocation model for service jobs (Balayla Model)". International Journal of Service Science, Management, Engineering, and Technology, 3(2), April–June 2012, pp. 13–34.
https://en.wikipedia.org/wiki/Work_measurement
A work of art , artwork , [ 1 ] art piece , piece of art or art object is an artistic creation of aesthetic value. Except for "work of art", which may be used of any work regarded as art in its widest sense, including works from literature and music , these terms apply principally to tangible, physical forms of visual art . Used more broadly, the terms are less commonly applied to works outside the visual arts. This article is concerned with the terms and concepts as used in and applied to the visual arts, although other fields such as music (aural) and literature (written word) have similar issues and philosophies. The term objet d'art is reserved to describe works of art that are not paintings, prints, drawings, large or medium-sized sculptures, or architecture (e.g. household goods, figurines, etc., some purely aesthetic, some also practical). The term oeuvre is used to describe the complete body of work completed by an artist throughout a career. [ 2 ] A work of art in the visual arts is a physical two- or three-dimensional object that is professionally determined or otherwise considered to fulfill a primarily independent aesthetic function. A singular art object is often seen in the context of a larger art movement or artistic era , such as a genre , aesthetic convention , culture , or regional-national distinction. [ 3 ] It can also be seen as an item within an artist's "body of work" or oeuvre . The term is commonly used by museum and cultural heritage curators , the interested public, the art patron and private art collector community, and art galleries . [ 4 ] Physical objects that document immaterial or conceptual art works, but do not conform to artistic conventions, can be redefined and reclassified as art objects. Some Dada and Neo-Dada conceptual and readymade works have received later inclusion. Some architectural renderings and models of unbuilt projects, such as those by Vitruvius , Leonardo da Vinci , Frank Lloyd Wright , and Frank Gehry , are further examples. The products of environmental design , depending on intention and execution, can be "works of art" and include: land art , site-specific art , architecture , gardens , landscape architecture , installation art , rock art , and megalithic monuments . Legal definitions of "work of art" are used in copyright law; see Visual arts § United States of America copyright definition of visual art . Theorists have argued that objects and people do not have a constant meaning, but their meanings are fashioned by humans in the context of their culture, as they have the ability to make things mean or signify something. [ 5 ] A prime example of this theory is the Readymades of Marcel Duchamp . Marcel Duchamp criticized the idea that the work of art must be a unique product of an artist's labour or skill through his "readymades": "mass-produced, commercially available, often utilitarian objects" to which he gave titles, designating them as artwork only through these processes of choosing and naming. [ 6 ] Artist Michael Craig-Martin , creator of An Oak Tree , said of his work – "It's not a symbol. I have changed the physical substance of the glass of water into that of an oak tree. I didn't change its appearance. The actual oak tree is physically present, but in the form of a glass of water." [ 7 ] Some art theorists and writers have long made a distinction between the physical qualities of an art object and its identity-status as an artwork.
[ 8 ] For example, a painting by Rembrandt has a physical existence as an "oil painting on canvas" that is separate from its identity as a masterpiece "work of art" or the artist's magnum opus . [ 9 ] Many works of art are initially denied "museum quality" or artistic merit, and only later become accepted and valued in museum and private collections. Works by the Impressionists and non-representational abstract artists are examples. Some, such as the readymades of Marcel Duchamp, including his infamous urinal Fountain , have later been reproduced as museum-quality replicas. Research suggests that presenting an artwork in a museum context can affect the perception of it. [ 10 ] There is an indefinite distinction, for current or historical aesthetic items, between " fine art " objects made by " artists " and folk art , craft-work , or " applied art " objects made by "first, second, or third-world" designers , artisans and craftspeople. Contemporary and archeological indigenous art , industrial design items in limited or mass production , and places created by environmental designers and cultural landscapes are some examples. The term has remained subject to continual debate, reconsideration, and redefinition.
https://en.wikipedia.org/wiki/Work_of_art
In physics , work output is the work done by a simple machine , compound machine , or any type of engine model. In common terms, it is the energy output, which for simple machines is always less than the energy input, even though the forces may be drastically different. In thermodynamics, work output can refer to the thermodynamic work done by a heat engine , in which case the amount of work output must be less than the heat input, as energy is lost to waste heat as determined by the engine's efficiency . This thermodynamics-related article is a stub . You can help Wikipedia by expanding it .
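As a brief numeric illustration of the efficiency relation implied above (the standard heat-engine identity W = η·Q_in; the 35% efficiency figure is an illustrative assumption, not from this article):

```python
def work_output(heat_input_j, efficiency):
    """Work delivered by a heat engine: W = efficiency * Q_in."""
    return heat_input_j * efficiency

# An engine at 35% efficiency receiving 100 J of heat delivers 35 J of
# work; the remaining 65 J is rejected as waste heat.
w = work_output(100.0, 0.35)
print(w, 100.0 - w)
```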
https://en.wikipedia.org/wiki/Work_output
Work sampling is the statistical technique used for determining the proportion of time spent by workers in various defined categories of activity (e.g. setting up a machine, assembling two parts, idle, etc.). [ 1 ] It permits quick analysis, recognition, and enhancement of job responsibilities, tasks, performance competencies, and organizational work flows. Other names used for it are 'activity sampling', 'occurrence sampling', and 'ratio delay study'. [ 2 ] In a work sampling study, a large number of observations are made of the workers over an extended period of time. For statistical accuracy, the observations must be taken at random times during the period of study, and the period must be representative of the types of activities performed by the subjects. One important usage of the work sampling technique is the determination of the standard time for a manual manufacturing task. Similar techniques for calculating the standard time are time study , standard data, and predetermined motion time systems . The study of work sampling has some general characteristics related to the work conditions. There are several recommended steps when preparing a work sampling study. [ 1 ] After the work elements are defined, the number of observations needed for the desired accuracy at the desired confidence level must be determined. The formula used in this method is:

σ_P = √(pq / n), which rearranges to n = pq / σ_P²

where σ_P is the standard error of the proportion, p is the percentage of working time, q is the percentage of idle time, and n is the number of observations (a short numerical sketch follows below). Work sampling was initially developed for determining time allocation among workers' tasks in manufacturing environments. [ 3 ] However, the technique has also been applied more broadly to examine work in a number of different environments, such as healthcare [ 4 ] and construction. [ 5 ] More recently, in the academic fields of organizational psychology and organizational behaviour , the basic technique has been developed into a detailed job analysis method for examining a range of different research questions. [ 6 ]
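A minimal numerical sketch of the sample-size formula above; the 75% working-time proportion and the 5-point standard error are illustrative assumptions.

```python
import math

def observations_needed(p_working, std_error):
    """n = p*q / sigma_P**2, with p and q as fractions of total time.

    The percentage form of the formula is equivalent; fractions just
    keep the units consistent.
    """
    q_idle = 1.0 - p_working
    return math.ceil(p_working * q_idle / std_error**2)

# If roughly 75% of time is spent working and we accept a standard
# error of 5 percentage points on that proportion:
print(observations_needed(0.75, 0.05))  # -> 75 observations
```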
https://en.wikipedia.org/wiki/Work_sampling
A work system is a socio-technical system in which human participants and/or machines perform tasks using information, technology, and other resources to produce products and services for internal or external customers. Typical business organizations contain work systems that procure materials from suppliers, produce products, deliver products to customers, find customers, create financial reports, hire employees, coordinate work across departments, and perform many other functions. The concept is widely used in understanding IT-reliant systems within organizations and has been a topic of academic study since at least 1977. The term "work system" has been used loosely in many areas. This article concerns its use in understanding IT-reliant systems in organizations. A notable use of the term occurred in 1977 in the first volume of MIS Quarterly in two articles by Bostrom and Heinen. [ 1 ] [ 2 ] Later Sumner and Ryan used it to explain problems in the adoption of CASE (computer-aided software engineering). [ 3 ] A number of socio-technical systems researchers such as Trist and Mumford also used the term occasionally, but seemed not to define it in detail. In contrast, the work system approach defines work system carefully and uses it as a basic analytical concept. The work system concept is like a common denominator for many of the types of systems that operate within or across organizations. Operational information systems, service systems, projects, supply chains, and ecommerce web sites can all be viewed as special cases of work systems. The relationship between work systems in general and the special cases implies that the same basic concepts apply to all of the special cases, which also have their own specialized vocabulary. In turn, this implies that much of the body of knowledge for the current information systems discipline can be organized around a work system core. Specific information systems exist to support (other) work systems. Many different degrees of overlap are possible between an information system and a work system that it supports. For example, an information system might provide information for a non-overlapping work system, as happens when a commercial marketing survey provides information to a firm's marketing managers. In other cases, an information system may be an integral part of a work system, as happens in highly automated manufacturing and in ecommerce web sites. In these situations, participants in the work system are also participants in the information system, the work system cannot operate properly without the information system, and the information system has little significance outside of the work system. The work system approach for understanding systems includes both a static view of a current (or proposed) system in operation and a dynamic view of how a system evolves over time through planned change and unplanned adaptations. The static view is summarized by the work system framework, which identifies the basic elements for understanding and evaluating a work system. The work system framework is often represented with a triangular scheme. [ 4 ] [ 5 ] [ 6 ] The work system itself consists of four elements: the processes and activities, participants, information, and technologies. Five other elements must be included in even a rudimentary understanding of a work system's operation, context, and significance. Those elements are the products/services produced, customers, environment, infrastructure, and strategies.
Customers may also be participants in a work system, as happens when a doctor examines a patient. This framework is prescriptive enough to be useful in describing the system being studied, identifying problems and opportunities, describing possible changes, and tracing how those changes might affect other parts of the work system. The definitions of the 9 elements of the work system framework are as follows:

Processes and activities include everything that happens within the work system. The term processes and activities is used instead of the term business process because many work systems do not contain highly structured business processes involving a prescribed sequence of steps, each of which is triggered in a pre-defined manner. Such processes are sometimes described as “artful processes” whose sequence and content “depend on the skills, experience, and judgment of the primary actors.” [ 7 ] In effect, business process is but one of a number of different perspectives for analyzing the activities within a work system. Other perspectives with their own valuable concepts and terminology include decision-making, communication, coordination, control, and information processing.

Participants are people who perform the work. Some may use computers and IT extensively, whereas others may use little or no technology. When analyzing a work system the more encompassing role of work system participant is more important than the more limited role of technology user (whether or not particular participants happen to be technology users). In work systems that are viewed as service systems, it is especially important to identify activities in which customers are participants.

Information includes codified and non-codified information used and created as participants perform their work. Information may or may not be computerized. Data not related to the work system is not directly relevant, making the distinction between data and information secondary when describing or analyzing a work system. Knowledge can be viewed as a special case of information.

Technologies include tools (such as cell phones, projectors, spreadsheet software, and automobiles) and techniques (such as management by objectives, optimization, and remote tracking) that work system participants use while doing their work.

Products/services are the combination of physical things, information, and services that the work system produces for its customers' benefit and use. This may include physical products, information products, services, intangibles such as enjoyment and peace of mind, and social products such as arrangements, agreements, and organizations. The term “products/services” is used because the distinction between products and services in marketing and service science [ 8 ] is not important for understanding work systems even though product-like vs. service-like is the basis of a series of design dimensions for characterizing and designing the things that a work system produces. [ 9 ]

Customers are people who receive direct benefit from products/services the work system produces. Since work systems exist to produce products/services for their customers, an analysis of a work system should consider who the customers are, what they want, and how they use whatever the work system produces. Customers may include external customers who receive an enterprise's products/services and internal customers who are employed by the enterprise, such as customers of a payroll work system.
Customers of a work system often are participants in the work system (e.g., patients in a medical exam, students in an educational setting, and clients in a consulting engagement). Environment includes the organizational, cultural, competitive, technical, and regulatory environment within which the work system operates. These factors affect system performance even though the system does not rely on them directly in order to operate. The organization's general norms of behavior are part of its culture, whereas more specific behavioral norms and expectations about specific activities within the work system are considered part of its processes and activities. Infrastructure includes human, informational, and technical resources that the work system relies on even though these resources exist and are managed outside of it and are shared with other work systems. Technical infrastructure includes computer networks, programming languages, and other technologies shared by other work systems and often hidden or invisible to work system participants. From an organizational viewpoint such as that expressed in Star and Bowker (2002) rather than a purely technical viewpoint, infrastructure includes human infrastructure, informational infrastructure, and technical infrastructure, all of which can be essential to a work system's operation and therefore should be considered in any analysis of a work system. Strategies include the strategies of the work system and of the department(s) and enterprise(s) within which the work system exists. Strategies at the department and enterprise level may help in explaining why the work system operates as it does and whether it is operating properly. The dynamic view of a work system starts with the work system life cycle (WSLC) model, which shows how a work system may evolve through multiple iterations of four phases: operation and maintenance, initiation, development, and implementation. The names of the phases were chosen to describe both computerized and non-computerized systems, and to apply regardless of whether application software is acquired, built from scratch, or not used at all. The terms development and implementation have business-oriented meanings that are consistent with Markus and Mao's distinction between system development and system implementation. [ 10 ] This model encompasses both planned and unplanned change. Planned change occurs through a full iteration encompassing the four phases, i.e., starting with an operation and maintenance phase, flowing through initiation, development, and implementation, and arriving at a new operation and maintenance phase. Unplanned change occurs through fixes, adaptations, and experimentation that can occur within any phase. The phases include the following activities: As an example of the iterative nature of a work system's life cycle, consider the sales system in a software start-up. The first sales system is the CEO selling directly. At some point the CEO can't do it alone, several salespeople are hired and trained, and marketing materials are produced that can be used by someone less expert than the CEO. As the firm grows, the sales system becomes regionalized and an initial version of sales tracking software is developed and used. Later, the firm changes its sales system again to accommodate needs to track and control a larger salesforce and predict sales several quarters in advance. A subsequent iteration might involve the acquisition and configuration of CRM software. 
The first version of the work system starts with an initiation phase. Each subsequent iteration involves deciding that the current sales system is insufficient; initiating a project that may or may not involve significant changes in software; developing the resources such as procedures, training materials, and software that are needed to support the new version of the work system; and finally, implementing the new work system. The pictorial representation of the work system life cycle model places the four phases at the vertices of a rectangle. Forward and backward arrows between each successive pair of phases indicate the planned sequence of the phases and allow the possibility of returning to a previous phase if necessary. To encompass both planned and unplanned change, each phase has an inward-facing arrow to denote unanticipated opportunities and unanticipated adaptations, thereby recognizing the importance of diffusion of innovation, experimentation, adaptation, emergent change, and path dependence. The work system life cycle model is iterative and includes both planned and unplanned change. It is fundamentally different from the frequently cited Systems Development Life Cycle (SDLC), which actually describes projects that attempt to produce software or produce changes in a work system. Current versions of the SDLC may contain iterations, but they are basically iterations within a project. More important, the system in the SDLC is basically a technical artifact that is being programmed. In contrast, the system in the WSLC is a work system that evolves over time through multiple iterations. That evolution occurs through a combination of defined projects and incremental changes resulting from small adaptations and experimentation. In contrast with control-oriented versions of the SDLC, the WSLC treats unplanned changes as part of a work system's natural evolution. The work system method [ 11 ] [ 12 ] is a method that business professionals (and/or IT professionals) can use for understanding and analyzing a work system at whatever level of depth is appropriate for their particular concerns. It has evolved iteratively since around 1997. At each stage, the then-current version was tested by evaluating the areas of success and the difficulties experienced by MBA and EMBA students trying to use it for a practical purpose. A version called “work-centered analysis” that was presented in a textbook has been used by a number of universities as part of the basic explanation of systems in organizations, to help students focus on business issues, and to help student teams communicate. Neil Ramiller reports on using a version of the work system framework within a method for “animating” the idea of business process within an undergraduate class. [ 13 ] In a research setting, Petrie (2004) used the work system framework as a basic analytical tool in a Ph.D. thesis examining 13 ecommerce web sites. Petkov and Petkova (2006) demonstrated the usefulness of the work system framework by comparing grades of students who did and did not learn about the framework before trying to interpret the same ERP case study. More recent evidence of the practical value of a work system approach is from Truex et al. (2010, 2011), which summarized results from 75 and later 300 management briefings produced by employed MBA students based on a work system analysis template.
These briefings contained the kind of analysis that would be discussed in the initiation phase of the WSLC, as decisions were being made about which projects to pursue and how to proceed. Results from analyses of real-world systems by typical employed MBA and EMBA students indicate that a systems analysis method for business professionals must be much more prescriptive than soft systems methodology (Checkland, 1999). While not a straitjacket, it must be at least somewhat procedural and must provide vocabulary and analysis concepts, while at the same time encouraging the user to perform the analysis at whatever level of detail is appropriate for the task at hand. The latest version of the work system method is organized around a general problem-solving outline that includes: In contrast to systems analysis and design methods for IT professionals, who need to produce a rigorous, totally consistent definition of a computerized system, the work system method:
https://en.wikipedia.org/wiki/Work_systems
A workcell is an arrangement of resources in a manufacturing environment intended to improve the quality, speed and cost of the process. Workcells achieve this by improving process flow and eliminating waste. They are based on the principles of Lean Manufacturing as described in The Machine That Changed the World by Womack, Jones and Roos. [ 1 ] Classical manufacturing management approaches dictate that costs be lowered by breaking the process into steps and ensuring that each of these steps minimizes cost and maximizes efficiency. This discrete approach has resulted in machines being placed apart from each other to maximize the efficiency and throughput of each machine. Traditional accounting for machine capitalization is based on the number of parts produced, and this approach reinforces the idea of lowering the cost of each machine by having it produce as many parts as possible. Increasing the number of parts (WIP) adds waste in areas such as inventory and transportation: large amounts of excess inventory accumulate between the machines in the process, for reasons to do with 'unbalanced' line capacities and batch processing, and the parts must be transported between the machines. An increase in the number of machines involved also reduces each worker's multi-skilling proficiency, since workers would need to learn how to operate multiple machines and would also need to move between them. Lean Manufacturing focuses on optimizing the end-to-end process as a whole. This enables a focus on creating a finished product at the lowest cost, instead of lowering the cost of each step. A common approach to achieving this is the workcell: machines involved in building a product are placed next to each other to minimize the transportation of both parts and people (an L-shaped desk with upper shelves is a good office analogy, putting many types of office equipment within a worker's reach). This minimizes waste in both transportation and the storage of excess inventory. At first glance, lean workcells may appear similar to traditional workcells, but they are inherently different. For instance, lean workcells must be designed for minimal wasted motion, which refers to any unnecessary time and effort required to assemble a product: excessive twists or turns, uncomfortable reaches or pickups, and unnecessary walking all contribute to wasted motion and may put error-inducing stress on the operator. Workcells can often be reconfigured easily to allow the adaptation of the process to fit takt time (a short sketch of the takt-time arithmetic follows below). This flexibility allows the work content to be adapted as demand or product mix changes. Another lean approach is to aim for flexible manufacturing through small production lot sizes, since this smooths production. Small lot sizes usually increase transportation waste, but this waste can be eliminated if machines are placed back-to-back in a workcell. The implementation of workcells can reduce costs by an order of magnitude (90%) [ citation needed ] . In software development , the core of the workcell is the cross-functional team . This team differs from a more traditional waterfall team:
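A minimal sketch of the takt-time arithmetic against which a workcell is balanced; the shift pattern and demand figures are illustrative assumptions.

```python
def takt_time_sec(available_time_sec, units_demanded):
    """Takt time: available production time divided by customer demand."""
    return available_time_sec / units_demanded

# One 8-hour shift with two 15-minute breaks, and a customer demand of
# 450 units per day:
available = (8 * 60 - 2 * 15) * 60        # 27,000 s of working time
print(takt_time_sec(available, 450))      # -> 60 s per unit
```

In this illustration the workcell must complete one unit every 60 seconds; if demand changes, the cell is reconfigured (e.g. staffing or work content rebalanced) so its cycle time again matches the new takt time.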
https://en.wikipedia.org/wiki/Workcell
The Worker Protection Standard (WPS) is a United States Environmental Protection Agency (EPA) federal regulation (40 CFR Part 170) intended to protect employees on farms and in forests, nurseries, and greenhouses who are occupationally exposed to agricultural pesticides . [ 1 ] The control of restricted use pesticides is managed by the EPA under this regulation, which includes a number of requirements. [ 2 ] Various other organizations and programs are related in one way or another to the administration of, and reporting about, WPS-based pesticide control. This agriculture article is a stub . You can help Wikipedia by expanding it . This article relating to law in the United States or its constituent jurisdictions is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Worker_Protection_Standard
The workerbot is a trademark developed by pi4 robotics GmbH to describe an industrial robot modeled on the movement capabilities and sensory abilities of a human . The industrial robot has two arms with seven degrees of freedom. Force sensors are integrated into the arms, enabling the robot to measure the forces that occur while it works and, similarly to a human, to adapt its gripping and machining processes to those forces. The robot is also equipped with cameras so that it can detect its environment and react to it. This industrial robot was developed within the EU-funded project PISA ( Flexible Assembly Systems through Workplace-Sharing and Time-Sharing Human-Machine Cooperation ). The project consortium consists of the lead company pi4_robotics GmbH together with the Fraunhofer IPK, the Universidad Politécnica de Madrid and the company EICAS Automazione S.p.A. The aim is to enable manufacturing companies within the European Union to produce cost-effectively using highly flexible industrial robots and so to prevent the migration of production to low-wage countries. The workerbot is the first operational humanoid factory worker worldwide that can be acquired by purchase, and in connection with the workerbot the first webshop for humanoid robots was created. [ 1 ]
https://en.wikipedia.org/wiki/Workerbot
A worker–machine activity chart is a chart used to describe or plan the interactions between workers and machines over time. [ 1 ] As the name indicates, the chart deals with the work elements and their times for both the worker and the machine. The chart is useful for describing any repetitive worker–machine system. A typical worker–machine activity chart consists of two main columns, one for the worker and the other for the machine; in some chart formats there is a third column showing the cumulative time. The chart can also be color-coded to convey information: for example, if the time column is shaded black, it indicates that the worker or the machine is performing an operation, while gray shading refers to inspection. It is customary [ citation needed ] to indicate moving with diagonal lines, whereas horizontal lines indicate a holding activity; if the column is blank, the worker or the machine is idle. For larger numbers of workers and machines there is a similar version, called the multiple worker–multiple machine activity chart. The chart can be used to investigate potential process improvements: it can illustrate delays and redundancy , so that process improvement efforts can eliminate inefficiencies and identify activities that can be combined. [ citation needed ]
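A toy sketch of the bookkeeping such a chart supports – summing each column's activity times over one repetitive cycle to obtain cycle time, idle time and utilization. The activity names and durations are illustrative assumptions.

```python
# Each column of the chart is a list of (activity, minutes) entries for
# one repetitive cycle; "idle" stands for a blank stretch of the column.
worker = [("load machine", 0.5), ("idle", 2.0), ("unload and inspect", 0.7)]
machine = [("being loaded", 0.5), ("machining", 2.0), ("being unloaded", 0.7)]

def summarize(column):
    """Cycle time, idle time and utilization for one chart column."""
    cycle = sum(minutes for _, minutes in column)
    idle = sum(minutes for name, minutes in column if name == "idle")
    return cycle, idle, (cycle - idle) / cycle

for label, column in (("worker", worker), ("machine", machine)):
    cycle, idle, utilization = summarize(column)
    print(f"{label}: cycle {cycle:.1f} min, idle {idle:.1f} min, "
          f"utilization {utilization:.0%}")
```

Here the worker is idle for most of the machining time, the kind of delay the chart makes visible and that improvement efforts (e.g. assigning a second machine) would target.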
https://en.wikipedia.org/wiki/Worker–machine_activity_chart
The Working Group on Women in Physics of the International Union of Pure and Applied Physics (IUPAP) was formed by resolution of the Atlanta IUPAP General Assembly in 1999. [ 1 ] The mandate of the group is: To carry out this charge the Working Group has, among other things, organized six international conferences ( Paris , France, in 2002; Rio de Janeiro , Brazil, in 2005; Seoul , South Korea, in 2008; Stellenbosch , South Africa, in 2011; Waterloo , Canada, in 2014; and Birmingham , UK, in 2017), gathering teams from more than 60 countries that collected data on the local situation of women in physics . It also promoted and collaborated in the elaboration of a global survey of physicists that was carried out by the Statistical Research Center of the American Institute of Physics . [ 2 ] [ 3 ] The IUPAP Working Group is currently involved in the development of a new survey that will include other natural sciences and mathematics, within the framework of the Gender Gap in Science Project [ 4 ] funded by the International Council for Science (ICSU). [ 5 ]
https://en.wikipedia.org/wiki/Working_Group_on_Women_in_Physics
For fluid power , a working fluid is a gas or liquid that primarily transfers force , motion , or mechanical energy . In hydraulics , water or hydraulic fluid transfers force between hydraulic components such as hydraulic pumps , hydraulic cylinders , and hydraulic motors that are assembled into hydraulic machinery , hydraulic drive systems , etc. In pneumatics , the working fluid is air or another gas which transfers force between pneumatic components such as compressors , vacuum pumps , pneumatic cylinders , and pneumatic motors . In pneumatic systems, the working gas also stores energy because it is compressible. (Gases also heat up as they are compressed and cool as they expand. Some gases also condense into liquids as they are compressed and boil as pressure is reduced.) For passive heat transfer , a working fluid is a gas or liquid, usually called a coolant or heat transfer fluid, that primarily transfers heat into or out of a region of interest by conduction , convection , and/or forced convection (pumped liquid cooling , air cooling , etc.). The working fluid of a heat engine or heat pump is a gas or liquid, usually called a refrigerant , coolant, or working gas, that primarily converts thermal energy (temperature change) into mechanical energy (or vice versa) by phase change and/or heat of compression and expansion. Examples using phase change include water↔steam in steam engines , and refrigerants in vapor-compression refrigeration and air conditioning systems. Examples without phase change include air or hydrogen in hot air engines such as the Stirling engine , and air or gases in gas-cycle heat pumps . (Some heat pumps and heat engines use "working solids", such as rubber bands, for elastocaloric refrigeration or thermoelastic cooling, and nickel titanium in a prototype heat engine.) Working fluids other than air or water are necessarily recirculated in a loop. Some hydraulic and passive heat-transfer systems are open to the water supply and/or atmosphere, sometimes through breather filters . Heat engines, heat pumps, and systems using volatile liquids or special gases are usually sealed behind relief valves . The working fluid's properties are essential for the full description of thermodynamic systems. Although working fluids have many physical properties which can be defined, the thermodynamic properties often required in engineering design and analysis are few: pressure , temperature , enthalpy , entropy , specific volume , and internal energy are the most common. If at least two thermodynamic properties are known, the state of the working fluid can be defined. This is usually done on a property diagram, which is simply a plot of one property versus another. When the working fluid passes through engineering components such as turbines and compressors , the point on a property diagram moves due to the possible changes of certain properties. In theory, therefore, it is possible to draw a line or curve which fully describes the thermodynamic properties of the fluid. In reality, however, this can only be done if the process is reversible ; if not, the changes in property are represented as a dotted line on a property diagram. This issue does not really affect thermodynamic analysis, since in most cases it is the end states of a process which are sought. The working fluid can be used to output useful work if used in a turbine . Also, in thermodynamic cycles energy may be input to the working fluid by means of a compressor .
The mathematical formulation for this may be quite simple if we consider a cylinder in which a working fluid resides, with a piston used to input useful work to the fluid. From mechanics, the work done from state 1 to state 2 of the process is given by

W = −∫₁² F ds

where ds is the incremental distance from one state to the next and F is the force applied. The negative sign is introduced since in this case a decrease in volume is being considered. The force is given by the product of the pressure in the cylinder and its cross-sectional area, F = P·A, so that

W = −∫₁² P A ds = −∫₁² P dV

where A·ds = dV is the elemental change of cylinder volume. If from state 1 to 2 the volume increases, then the working fluid actually does work on its surroundings, and this is commonly denoted by a negative work; if the volume decreases, the work is positive. By the definition given with the above integral, the work done is represented by the area under a pressure–volume diagram (a numerical sketch follows below). If we consider a constant-pressure process, the work is simply

W = −P (V₂ − V₁)

Depending on the application, various types of working fluids are used. In a thermodynamic cycle it may be the case that the working fluid changes state from gas to liquid or vice versa. Certain gases such as helium can be treated as ideal gases . This is not generally the case for superheated steam, and the ideal gas equation does not really hold; at much higher temperatures, however, it still yields relatively accurate results. The physical and chemical properties of the working fluid are extremely important when designing thermodynamic systems. For instance, in a refrigeration unit the working fluid is called the refrigerant. Ammonia is a typical refrigerant and may be used as the primary working fluid. Compared with water (which can also be used as a refrigerant), ammonia operates at relatively high pressures, requiring more robust and expensive equipment. In air standard cycles, as in gas turbine cycles, the working fluid is air. In the open-cycle gas turbine, air enters a compressor where its pressure is increased; the compressor therefore inputs work to the working fluid (positive work). The fluid is then transferred to a combustion chamber, where heat energy is input by means of the burning of a fuel. The air then expands in a turbine, thus doing work against the surroundings (negative work). Different working fluids have different properties, and in choosing a particular one the designer must identify the major requirements; in refrigeration units, for example, high latent heats are required to provide large refrigeration capacities.
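A minimal numerical sketch of the P–V work integral above, using the article's sign convention (compression counted as positive work input); the isothermal ideal-gas path and its conditions are illustrative assumptions.

```python
import numpy as np

# Work input to the fluid, W = -int(P dV), evaluated by the trapezoidal
# rule along an isothermal ideal-gas compression (P V = n R T constant).
n_moles, R, T = 1.0, 8.314, 300.0            # assumed conditions
V = np.linspace(0.010, 0.005, 200)           # volume halves, in m^3
P = n_moles * R * T / V                      # ideal-gas pressure, in Pa

W = -np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))   # -int(P dV), in J
print(W)        # ~ n R T ln(2) ~ 1.73 kJ, positive because V decreases
```

The analytic value for this path is n·R·T·ln(V₁/V₂), so the printed result can be checked directly against 8.314 × 300 × ln 2 ≈ 1729 J.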
https://en.wikipedia.org/wiki/Working_fluid
Heat engines, refrigeration cycles and heat pumps usually involve a fluid to and from which heat is transferred while it undergoes a thermodynamic cycle. This fluid is called the working fluid . [ 1 ] Refrigeration and heat pump technologies often refer to working fluids as refrigerants . Most thermodynamic cycles make use of the latent heat (the advantage of phase change) of the working fluid; in other cycles the working fluid remains in the gaseous phase throughout all the processes of the cycle. In heat engines, the working fluid generally undergoes a combustion process as well, for example in internal combustion engines or gas turbines . There are also heat pump and refrigeration technologies in which the working fluid does not change phase , such as the reverse Brayton or Stirling cycle. This article summarises the main criteria for selecting working fluids for a thermodynamic cycle , covering heat engines – including low-grade heat recovery using the Organic Rankine Cycle (ORC) for geothermal energy , waste heat , thermal solar energy or biomass – as well as heat pumps and refrigeration cycles . It addresses how working fluids affect technological applications in which the working fluid undergoes a phase transition and does not remain in its original (mainly gaseous ) phase during all the processes of the thermodynamic cycle. Finding the optimal working fluid for a given purpose – which is essential to achieving higher energy efficiency in energy conversion systems – has a great impact on the technology: it not only influences the operational variables of the cycle but also alters the layout and modifies the design of the equipment. Selection criteria for working fluids generally include thermodynamic and physical properties besides economic and environmental factors, but most often all of these criteria are used together. The choice of working fluid is known to have a significant impact on the thermodynamic as well as the economic performance of the cycle. A suitable fluid must exhibit favorable physical, chemical, environmental, safety and economic properties, such as low specific volume (high density ), viscosity , toxicity , flammability , ozone depletion potential (ODP), global warming potential (GWP) and cost, as well as favorable process characteristics such as high thermal and exergetic efficiency. These requirements apply both to pure (single-component) and mixed (multicomponent) working fluids. Existing research is largely focused on the selection of pure working fluids, with a vast number of published reports currently available. An important restriction of pure working fluids is their constant temperature profile during phase change. Working fluid mixtures are more appealing than pure fluids because their evaporation temperature profile is variable and therefore follows the profile of the heat source better, as opposed to the flat (constant) evaporation profile of pure fluids. This enables an approximately stable temperature difference during evaporation in the heat exchanger – termed the temperature glide – which significantly reduces exergetic losses. Despite their usefulness, recent publications addressing the selection of mixed fluids are considerably fewer. [ 2 ] Many authors, for example O. Badr et al., [ 3 ] have suggested thermodynamic and physical criteria which a working fluid should meet for heat engines such as Rankine cycles.
The criteria concerning working fluids used in heat engines differ somewhat from those for refrigeration cycles or heat pumps. The traditional and presently most widespread categorisation of pure working fluids, dating back to the 1960s, was first used by H. Tabor et al. [ 4 ] and O. Badr et al. [ 3 ] This three-class classification system sorts pure working fluids into three categories based on the shape of the fluid's saturation vapour curve in the temperature–entropy plane . If the slope of the saturation vapour curve is negative in all states (ds/dT < 0), meaning that entropy increases as the saturation temperature decreases, the fluid is called wet. If the slope of the saturation vapour curve is mainly positive (regardless of a short negative-slope section somewhat below the critical point ), meaning that entropy decreases as the saturation temperature decreases (ds/dT > 0), the fluid is dry. The third category is called isentropic , meaning constant entropy, and refers to those fluids that have a vertical saturation vapour curve (regardless of a short negative-slope section somewhat below the critical point) in the temperature–entropy diagram; mathematically, this corresponds to an infinite slope dT/ds, or equivalently ds/dT = 0. The terms wet, dry and isentropic refer to the quality of the vapour after the working fluid undergoes an isentropic ( reversible adiabatic ) expansion process from the saturated vapour state. During an isentropic expansion process the working fluid always ends in the two-phase (also called wet) zone if it is a wet-type fluid. If the fluid is of the dry type, the isentropic expansion necessarily ends in the superheated (also called dry) steam zone. If the working fluid is of the isentropic type, after an isentropic expansion process the fluid stays in the saturated vapour state. The quality of the vapour is a key factor in choosing a steam turbine or expander for heat engines. The traditional classification shows several theoretical and practical deficiencies. One of the most important is the fact that no perfectly isentropic fluid exists. [ 6 ] [ 7 ] Isentropic fluids have two extrema (ds/dT = 0) on the saturation vapour curve; in practice, some fluids come very close to this behaviour, at least in a certain temperature range, for example trichlorofluoromethane (CCl 3 F). Another problem is the extent to which a fluid behaves as dry or isentropic, which has significant practical importance when designing, for example, an Organic Rankine Cycle layout and choosing a proper expander. A new kind of classification was proposed by G. Györke et al. [ 5 ] to resolve the problems and deficiencies of the traditional three-class classification system. The new classification is likewise based on the shape of the saturation vapour curve of the fluid in the temperature–entropy diagram. It uses a characteristic-point based method to differentiate the fluids: the method defines three primary and two secondary characteristic points, and the relative location of these points on the temperature–entropy saturation curve defines the categories.
Every pure fluid has primary characteristic points A, C and Z. The two secondary characteristic points, M and N, are defined as local entropy extrema on the saturation vapour curve; more precisely, they are the points where entropy is stationary with respect to saturation temperature: d s /d T =0. In terms of the traditional classification, wet-type fluids have only the primary points (A, C and Z), dry-type fluids have the primary points and exactly one secondary point (M), and redefined isentropic-type fluids have both secondary points (M and N) in addition to the primary points. See figure for better understanding. The ascending order of the entropy values of the characteristic points gives a useful tool to define categories. The mathematically possible number of orderings is 3! = 6 (if there are no secondary points), 4! = 24 (if only secondary point M exists) and 5! = 120 (if both secondary points exist), 150 in total. Physical constraints, including the conditions for the existence of the secondary points, reduce the number of possible categories to 8. The categories are named after the ascending order of the entropy of their characteristic points; the eight possible categories are ACZ, ACZM, AZCM, ANZCM, ANCZM, ANCMZ, ACNZM and ACNMZ. The categories (also called sequences) can be fitted into the traditional three-class classification, which makes the two classification systems compatible. No working fluids have been found that fit into the ACZM or ACNZM categories; theoretical studies [ 6 ] [ 7 ] confirmed that these two categories may not even exist. Based on the database of NIST , [ 8 ] the six confirmed sequences of the novel classification and their relation to the traditional one can be seen in the figure. Although multicomponent working fluids have significant thermodynamic advantages over pure (single-component) ones, research and applications continue to focus on pure working fluids. However, there are some typical examples of multicomponent-based technologies, such as the Kalina cycle , which uses a water and ammonia mixture, or absorption refrigerators , which mostly use water–ammonia, water–ammonia– hydrogen , lithium bromide or lithium chloride mixtures. Some scientific papers deal with the application of multicomponent working fluids in Organic Rankine cycles as well; these are mainly binary mixtures of hydrocarbons, fluorocarbons , hydrofluorocarbons , siloxanes and inorganic substances. [ 9 ]
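The three-class categorisation described above lends itself to a simple numerical test. The following Python sketch is a minimal illustration of the idea rather than any published implementation: it counts the local entropy extrema (the secondary points M and N, where ds/dT = 0) on a tabulated saturated-vapour curve; the input data are assumed to be smooth and ordered, and real property data from a source such as NIST would be needed in practice.

```python
import numpy as np

def classify_saturation_curve(T, s):
    """Classify a pure working fluid from tabulated saturated-vapour data.

    T, s: arrays of saturation temperature [K] and saturated-vapour
    entropy [J/(kg K)], ordered by increasing temperature up to the
    critical point. The traditional category is recovered by counting
    sign changes of ds/dT, i.e. the secondary characteristic points.
    """
    dsdT = np.gradient(s, T)                  # numerical slope ds/dT
    sign_changes = np.diff(np.sign(dsdT)) != 0
    n_extrema = int(np.count_nonzero(sign_changes))
    if n_extrema == 0:
        return "wet (ds/dT < 0 everywhere)"
    elif n_extrema == 1:
        return "dry (one secondary point M)"
    else:
        return "isentropic (secondary points M and N)"
```

With noisy property data the slope should be smoothed before counting extrema, otherwise spurious sign changes near the critical point would misclassify the fluid.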
https://en.wikipedia.org/wiki/Working_fluid_selection
Working mass , also referred to as reaction mass , is a mass against which a system operates in order to produce acceleration . In the case of a chemical rocket, for example, the reaction mass is the combustion product of the burned fuel, expelled backwards to provide propulsion. All acceleration requires an exchange of momentum , which can be thought of as the "unit of movement". Momentum is related to mass and velocity, as given by the formula P = mv , where P is the momentum, m the mass, and v the velocity. The velocity of a body is easily changed, but in most cases its mass is not, which is what makes the working mass important. In rockets, the total velocity change can be calculated (using the Tsiolkovsky rocket equation ) as follows: Δv = u ln((m + M)/M), where Δv is the change in velocity of the vehicle, u is the exhaust velocity of the working mass, m is the mass of the expended working mass (propellant), and M is the remaining mass of the vehicle. The term working mass is used primarily in the aerospace field. In more "down to earth" examples, the working mass is typically provided by the Earth, which contains so much momentum in comparison to most vehicles that the amount it gains or loses can be ignored. However, in the case of an aircraft the working mass is the air, and in the case of a rocket , it is the rocket fuel itself. Most rocket engines use light-weight fuels ( liquid hydrogen , oxygen , or kerosene ) accelerated to supersonic speeds. However, ion engines often use heavier elements like xenon as the reaction mass, accelerated to much higher speeds using electric fields. In many cases, the working mass is separate from the energy used to accelerate it. In a car, the engine provides power to the wheels, which then accelerate the Earth backward to make the car move forward. This is not the case for most rockets, however, where the rocket propellant is the working mass as well as the energy source. This means that rockets stop accelerating as soon as they run out of fuel, regardless of other power sources they may have. This can be a problem for satellites that need to be repositioned often, as it limits their useful life. In general, the exhaust velocity should be close to the ship velocity for optimum energy efficiency . This limitation of rocket propulsion is one of the main motivations for the ongoing interest in field propulsion technology.
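As a quick illustration of the rocket equation quoted above, the short Python snippet below evaluates Δv = u ln((m + M)/M); the stage figures used are purely hypothetical numbers chosen for the example, not data from the article.

```python
import math

def delta_v(u, m, M):
    """Tsiolkovsky rocket equation.

    u: effective exhaust velocity [m/s]
    m: working (propellant) mass expended [kg]
    M: remaining mass of the vehicle after burnout [kg]
    """
    return u * math.log((m + M) / M)

# Hypothetical stage: 4500 m/s exhaust velocity,
# 120 t of propellant, 20 t remaining mass.
print(delta_v(4500.0, 120e3, 20e3))  # ~8757 m/s
```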
https://en.wikipedia.org/wiki/Working_mass
Each instrument used in analytical chemistry has a useful working range . This is the range of concentration (or mass) that can be adequately determined by the instrument, i.e. over which the instrument provides a useful signal that can be related to the concentration of the analyte . [ 1 ] All instruments have an upper and a lower working limit. Concentrations below the lower working limit do not provide enough signal to be useful, and concentrations above the upper working limit provide too much signal to be useful. When calibrating an instrument for use, the experimenter must be familiar with both the lower and upper limits of the working range of the chosen instrument; results obtained from a sample of concentration outside the working range are often statistically uncertain . [ 2 ] This article about analytical chemistry is a stub . You can help Wikipedia by expanding it .
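As an illustration of why the working range matters in practice, here is a minimal Python sketch, not taken from the article, that fits a linear calibration curve to standards measured inside the working range and refuses to quantify signals falling outside the calibrated interval; all numbers are invented for the example.

```python
import numpy as np

# Known standards measured inside the instrument's working range
# (illustrative values: concentration units vs. instrument signal).
conc_std = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
signal_std = np.array([0.11, 0.20, 0.52, 1.01, 2.05])

slope, intercept = np.polyfit(conc_std, signal_std, 1)

def quantify(signal):
    """Invert the calibration curve, rejecting out-of-range signals."""
    lo, hi = signal_std.min(), signal_std.max()
    if not (lo <= signal <= hi):
        raise ValueError("signal outside calibrated working range; "
                         "result would be statistically uncertain")
    return (signal - intercept) / slope

print(round(quantify(0.75), 2))  # a signal safely inside the range
```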
https://en.wikipedia.org/wiki/Working_range
Workshop on Numerical Ranges and Numerical Radii (WONRA) is a biennial workshop series on numerical ranges and numerical radii which began in 1992. Numerical ranges and numerical radii are useful in the study of matrix and operator theory. These topics have applications in many subjects in pure and applied mathematics, such as quadratic forms , Banach spaces , dilation theory , control theory , numerical analysis , and quantum information science . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] In the early 1970s, numerical range workshops were organized by Frank Bonsall and John Duncan. More activities were started in the early 1990s, including the biennial workshop series, which began in 1992, and special issues devoted to this workshop were published. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
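For readers unfamiliar with the object of study: the numerical range of a square matrix A is the set W(A) = { x*Ax : ‖x‖ = 1 }, and the numerical radius is the largest modulus attained on it. The Python sketch below is a standard textbook construction, not anything specific to the workshop: it approximates the boundary of W(A) by rotating A and taking extreme eigenvectors of the Hermitian part.

```python
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    """Approximate boundary points of the numerical range W(A).

    For each angle theta, the eigenvector of the largest eigenvalue of
    the Hermitian part of e^{i theta} A yields a point on the boundary
    of W(A) (a supporting-line argument; W(A) is convex by the
    Toeplitz-Hausdorff theorem).
    """
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * A
        H = (R + R.conj().T) / 2           # Hermitian part of rotated A
        w, V = np.linalg.eigh(H)           # eigenvalues ascending
        x = V[:, -1]                       # eigvec of largest eigenvalue
        pts.append(x.conj() @ A @ x)       # boundary point x* A x
    return np.array(pts)

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # W(A) is the disc of radius 1/2
boundary = numerical_range_boundary(A)
print(np.abs(boundary).max())              # numerical radius, ~0.5
```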
https://en.wikipedia.org/wiki/Workshop_on_Numerical_Ranges_and_Numerical_Radii
The World Association of Theoretical and Computational Chemists ( WATOC ) is a scholarly association founded in 1982 "in order to encourage the development and application of theoretical methods" in chemistry , particularly theoretical chemistry and computational chemistry . [ 1 ] It was originally called the World Association of Theoretical Organic Chemists , [ 1 ] but was later renamed the World Association of Theoretically Oriented Chemists , [ 3 ] and in 2005 renamed once more to the World Association of Theoretical and Computational Chemists . [ 1 ] WATOC organizes a triennial world congress, which in recent years has attracted over 1,000 participants. [ 1 ] [ 4 ] The association awards two yearly medals: the Schrödinger Medal to one "outstanding theoretical and computational chemist", [ 5 ] and the Dirac Medal to one "outstanding theoretical and computational chemist under the age of 40". [ 6 ] The association also publishes a list of its presidents. [ 1 ]
https://en.wikipedia.org/wiki/World_Association_of_Theoretical_and_Computational_Chemists
The World Cell Race is a competition among labs to see which biological cell type can travel 600 microns the fastest. The idea is to promote research into how to make cells move faster to aid immune system response or slow metastatic cancers . A fork with a dead end was added to the course in 2013 to assess responses to growth-factor protein . The race was broadcast live online. [ 1 ] A Dicty World Race "to find the fastest and smartest Dicty cells" took place on May 16, 2014 in Boston. [ 2 ] This chemistry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/World_Cell_Race
The World Chlorine Council ( WCC ) is an international network of national and regional trade associations representing the chlorine and chlorinated products industries in more than 27 countries. [ 1 ] [ 2 ] Members include chloralkali process associations such as Euro Chlor , Japan Soda Industry Association , Alkali Manufacturers' Association of India , and RusChlor ( Russian Federation ). Members from the product sector include five vinyl producer associations, and the Halogenated Solvents Industry Alliance (United States). [ 3 ] This article about a chemistry organization is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/World_Chlorine_Council
The World Congress On Information Technology (WCIT) 2019 is an information and communications technology (ICT) event which took place from October 6 to 9, 2019 in Yerevan , Armenia . [ 1 ] The 23rd World Congress on IT featured discussions related to the evolution of the Digital Age . It included sessions on topics ranging from artificial intelligence , virtual reality and smart cities to cybersecurity , climate change , and more. The 2019 World Congress had over 2000 delegates from 70 countries, with over 31 sponsoring organizations. The Congress has been organized since 1978 by the World Information Technology and Services Alliance (WITSA); it originally took place every two years in different countries and has been held annually since 2017. [ 2 ] The pre-opening celebration featured the world's first AI concert on Republic Square of Yerevan, Armenia , with conductor Sergey Smbatyan and special guest Armin Van Buuren . The programme consisted of substantive sessions, including the WCIT 2019 keynote address, a ministerial session, and a session on genomics . Among the featured speakers were internationally recognized leaders from government and industry.
https://en.wikipedia.org/wiki/World_Congress_on_Information_Technology_(2019)
World Day For Animals In Laboratories ( WDAIL ; also known as World Lab Animal Day ) [ 1 ] is observed every year on 24 April. The surrounding week has come to be known as "World Week for Animals In Laboratories". [ 2 ] The National Anti-Vivisection Society (NAVS) describes the day as an "international day of commemoration" [ 1 ] for animals in laboratories . In 1979, NAVS established World Day for Laboratory Animals (also referred to as Lab Animal Day) on April 24, Lord Hugh Dowding 's birthday. According to NAVS, this international day of commemoration is recognised by the United Nations, and it is now marked annually by anti-vivisectionists on every continent. In 1980, People for the Ethical Treatment of Animals (PETA), led by PETA founder Ingrid Newkirk , organized the first World Day for Laboratory Animals protest in the U.S. [ 3 ] Today the event is marked by demonstrations and protests by groups opposed to the use of animals in research. [ 4 ] In April 2010 protesters marched through central London calling for an end to the use of animals in research. [ 5 ] [ 6 ] [ 7 ] Similar marches took place in Birmingham in 2012 [ 8 ] and in Nottingham in 2014. [ 9 ] World Day and World Week For Animals In Laboratories have also attracted attention from scientific groups defending the use of animals in research. On 22 April 2009 members of UCLA Pro-Test held a rally in support of biomedical research on animals, [ 10 ] and to condemn the violence and harassment directed at faculty member Prof. David Jentsch by animal activists. [ 11 ] [ 12 ] NAVS and other groups opposed to animal research have claimed that World Day For Animals In Laboratories is recognised by the United Nations . [ 1 ] [ 13 ] [ 14 ] However, the day is not included on the official list of United Nations observances. [ 15 ]
https://en.wikipedia.org/wiki/World_Day_for_Laboratory_Animals
The World Energy Engineering Congress (WEEC) is an international energy industry conference and exposition hosted annually by the Association of Energy Engineers . [ 1 ] Professionals in the field of energy engineering from around the world convene annually at the WEEC to discuss energy-related issues and technologies.
https://en.wikipedia.org/wiki/World_Energy_Engineering_Congress
The World Federation of Engineering Organizations ( French : Federation Mondiale des Organisations d'Ingenieurs ; WFEO ) is an international, non-governmental organization representing the engineering profession worldwide. Founded in 1968 by a group of regional engineering organizations, under the auspices of the United Nations Educational, Scientific and Cultural Organization ( UNESCO ) in Paris, WFEO brings together national engineering organizations from over 90 nations and represents some 20 million engineers from around the world. WFEO has been part of the United Nations system as an NGO in official relations with UNESCO (associate status) since its foundation, and takes part in the work of the UN's main bodies, mainly the United Nations Economic and Social Council (ECOSOC) and its specialized agencies, notably the United Nations Industrial Development Organization and the United Nations Environment Programme . At the UN ECOSOC, WFEO co-organizes the Scientific and Technological Community Major Group with the International Science Council . In 2019, based on a proposal by WFEO, UNESCO's General Conference approved the creation of the UNESCO World Engineering Day for Sustainable Development , to be celebrated on 4 March of each year. Since then, WFEO has been coordinating the related celebrations around the world, through its membership and partnering institutions. The governing body of WFEO is the General Assembly. Between meetings of the General Assembly the affairs of the Federation are directed by the Executive Council. The business of the Federation is dealt with by the Executive Board, supported by the Executive Director. Actions of the General Assembly, Executive Council, or Executive Board are decided by majority vote. WFEO's membership comprises a hundred member institutions, including national members representing a country, and international members representing either a global region or continent, or representing a branch of the engineering profession at the global scale. WFEO's main activities in specialized fields of engineering are carried out by its committees, which are hosted by national members for a four-year term. WFEO bodies meet annually for the General Assembly or Executive Council and for the committees' meetings. These meetings are framed by a conference, which non-affiliated engineers can join. In general these meetings and conferences are held in November. WFEO's President is elected by the General Assembly for a two-year term, in the context of an immediate past president/president/president-elect system.
https://en.wikipedia.org/wiki/World_Federation_of_Engineering_Organizations
The World Geodetic System ( WGS ) is a standard used in cartography , geodesy , and satellite navigation including GPS . The current version, WGS 84 , defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum , and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency . [ 1 ] Efforts to supplement the various national surveying systems began in the 19th century with F.R. Helmert's book Mathematische und Physikalische Theorien der Höheren Geodäsie ( Mathematical and Physical Theories of Higher Geodesy ). Austria and Germany founded the Zentralbüro für die Internationale Erdmessung (Central Bureau of International Geodesy ), and a series of global ellipsoids of the Earth were derived (e.g., Helmert 1906, Hayford 1910 and 1924). A unified geodetic system for the whole world became essential in the 1950s for several reasons. In the late 1950s, the United States Department of Defense , together with scientists of other institutions and countries, began to develop the needed world system to which geodetic data could be referred and compatibility established between the coordinates of widely separated sites of interest. Efforts of the U.S. Army, Navy and Air Force were combined, leading to the DoD World Geodetic System 1960 (WGS 60). The term datum as used here refers to a smooth surface somewhat arbitrarily defined as zero elevation, consistent with a set of surveyor's measures of distances between various stations, and differences in elevation, all reduced to a grid of latitudes , longitudes , and elevations . Heritage surveying methods found elevation differences from a local horizontal determined by the spirit level , plumb line , or an equivalent device that depends on the local gravity field (see physical geodesy ). As a result, the elevations in the data are referenced to the geoid , a surface that is not readily found using satellite geodesy . The latter observational method is more suitable for global mapping. Therefore, a motivation, and a substantial problem in the WGS and similar work, is to patch together data that were made separately for different regions, and to re-reference the elevations to an ellipsoid model rather than to the geoid . In accomplishing WGS 60, a combination of available surface gravity data, astro-geodetic data and results from HIRAN [ 2 ] and Canadian SHORAN surveys were used to define a best-fitting ellipsoid and an earth-centered orientation for each initially selected datum. (Every datum is relatively oriented with respect to different portions of the geoid by the astro-geodetic methods already described.) The sole contribution of satellite data to the development of WGS 60 was a value for the ellipsoid flattening which was obtained from the nodal motion of a satellite. Prior to WGS 60, the U.S. Army and U.S. Air Force had each developed a world system by using different approaches to the gravimetric datum orientation method. To determine their gravimetric orientation parameters, the Air Force used the mean of the differences between the gravimetric and astro-geodetic deflections and geoid heights (undulations) at specifically selected stations in the areas of the major datums. The Army performed an adjustment to minimize the difference between astro-geodetic and gravimetric geoids .
By matching the relative astro-geodetic geoids of the selected datums with an earth-centered gravimetric geoid, the selected datums were reduced to an earth-centered orientation. Since the Army and Air Force systems agreed remarkably well for the NAD, ED and TD areas, they were consolidated and became WGS 60. Improvements to the global system included the Astrogeoid of Irene Fischer and the astronautic Mercury datum. In January 1966, a World Geodetic System Committee composed of representatives from the United States Army, Navy and Air Force was charged with developing an improved WGS, needed to satisfy mapping , charting and geodetic requirements. Additional surface gravity observations, results from the extension of triangulation and trilateration networks, and large amounts of Doppler and optical satellite data had become available since the development of WGS 60. Using the additional data and improved techniques, WGS 66 was produced which served DoD needs for about five years after its implementation in 1967. The defining parameters of the WGS 66 Ellipsoid were the flattening ( 1 ⁄ 298.25 determined from satellite data) and the semimajor axis ( 6 378 145 m determined from a combination of Doppler satellite and astro-geodetic data). A worldwide 5° × 5° mean free air gravity anomaly field provided the basic data for producing the WGS 66 gravimetric geoid. Also, a geoid referenced to the WGS 66 Ellipsoid was derived from available astrogeodetic data to provide a detailed representation of limited land areas. After an extensive effort over a period of approximately three years, the Department of Defense World Geodetic System 1972 was completed. Selected satellite, surface gravity and astrogeodetic data available through 1972 from both DoD and non-DoD sources were used in a Unified WGS Solution (a large scale least squares adjustment). The results of the adjustment consisted of corrections to initial station coordinates and coefficients of the gravitational field. [ 3 ] The largest collection of data ever used for WGS purposes was assembled, processed and applied in the development of WGS 72. Both optical and electronic satellite data were used. The electronic satellite data consisted, in part, of Doppler data provided by the U.S. Navy and cooperating non-DoD satellite tracking stations established in support of the Navy's Navigational Satellite System (NNSS). Doppler data was also available from the numerous sites established by GEOCEIVERS during 1971 and 1972. Doppler data was the primary data source for WGS 72 (see image). Additional electronic satellite data was provided by the SECOR (Sequential Collation of Range) Equatorial Network completed by the U.S. Army in 1970. Optical satellite data from the Worldwide Geometric Satellite Triangulation Program was provided by the BC-4 camera system (see image). Data from the Smithsonian Astrophysical Observatory was also used which included camera ( Baker–Nunn ) and some laser ranging. [ 3 ] The surface gravity field used in the Unified WGS Solution consisted of a set of 410 10° × 10° equal area mean free air gravity anomalies determined solely from terrestrial data. This gravity field includes mean anomaly values compiled directly from observed gravity data wherever the latter was available in sufficient quantity. The value for areas of sparse or no observational data were developed from geophysically compatible gravity approximations using gravity-geophysical correlation techniques. 
Approximately 45 percent of the 410 mean free air gravity anomaly values were determined directly from observed gravity data. [ 3 ] The astrogeodetic data in its basic form consists of deflection of the vertical components referred to the various national geodetic datums. These deflection values were integrated into astrogeodetic geoid charts referred to these national datums. The geoid heights contributed to the Unified WGS Solution by providing additional and more detailed data for land areas. Conventional ground survey data was included in the solution to enforce a consistent adjustment of the coordinates of neighboring observation sites of the BC-4, SECOR, Doppler and Baker–Nunn systems. Also, eight geodimeter long line precise traverses were included for the purpose of controlling the scale of the solution. [ 3 ] The Unified WGS Solution, as stated above, was a solution for geodetic positions and associated parameters of the gravitational field based on an optimum combination of available data. The WGS 72 ellipsoid parameters, datum shifts and other associated constants were derived separately. For the unified solution, a normal equation matrix was formed based on each of the mentioned data sets. Then, the individual normal equation matrices were combined and the resultant matrix solved to obtain the positions and the parameters. [ 3 ] The value for the semimajor axis ( a ) of the WGS 72 Ellipsoid is 6 378 135 m . The adoption of an a -value 10 meters smaller than that for the WGS 66 Ellipsoid was based on several calculations and indicators including a combination of satellite and surface gravity data for position and gravitational field determinations. Sets of satellite derived station coordinates and gravimetric deflection of the vertical and geoid height data were used to determine local-to-geocentric datum shifts, datum rotation parameters, a datum scale parameter and a value for the semimajor axis of the WGS Ellipsoid. Eight solutions were made with the various sets of input data, both from an investigative point of view and also because of the limited number of unknowns which could be solved for in any individual solution due to computer limitations. Selected Doppler satellite tracking and astro-geodetic datum orientation stations were included in the various solutions. Based on these results and other related studies accomplished by the committee, an a -value of 6 378 135 m and a flattening of 1/298.26 were adopted. [ 3 ] In the development of local-to WGS 72 datum shifts, results from different geodetic disciplines were investigated, analyzed and compared. Those shifts adopted were based primarily on a large number of Doppler TRANET and GEOCEIVER station coordinates which were available worldwide. These coordinates had been determined using the Doppler point positioning method. [ 3 ] In the early 1980s, the need for a new world geodetic system was generally recognized by the geodetic community as well as within the US Department of Defense. WGS 72 no longer provided sufficient data, information, geographic coverage, or product accuracy for all then-current and anticipated applications. The means for producing a new WGS were available in the form of improved data, increased data coverage, new data types and improved techniques. Observations from Doppler, satellite laser ranging and very-long-baseline interferometry (VLBI) constituted significant new information. An outstanding new source of data had become available from satellite radar altimetry. 
Also available was an advanced least squares method called collocation that allowed for a consistent combination solution from different types of measurements all relative to the Earth's gravity field, measurements such as the geoid, gravity anomalies, deflections, and dynamic Doppler. The new world geodetic system was called WGS 84. It is the reference system used by the Global Positioning System . It is geocentric and globally consistent within 1 m . Current geodetic realizations of the geocentric reference system family International Terrestrial Reference System (ITRS) maintained by the IERS are geocentric and internally consistent at the few-cm level, while still being metre-level consistent with WGS 84. The WGS 84 reference ellipsoid was based on GRS 80 , but it contains a very slight variation in the inverse flattening, as it was derived independently and the result was rounded to a different number of significant digits. [ 4 ] This resulted in a tiny difference of 0.105 mm in the semi-minor axis. [ 5 ] The following table compares the primary ellipsoid parameters. The coordinate origin of WGS 84 is meant to be located at the Earth's center of mass ; the uncertainty is believed to be less than 2 cm . [ 7 ] The WGS 84 meridian of zero longitude is the IERS Reference Meridian , [ 8 ] 5.3 arc seconds or 102 metres (335 ft) east of the Greenwich meridian at the latitude of the Royal Observatory . [ 9 ] [ 10 ] (This is related to the fact that the local gravity field at Greenwich does not point exactly through the Earth's center of mass, but rather "misses west" of the center of mass by about 102 meters.) The longitude positions on WGS 84 agree with those on the older North American Datum 1927 at roughly 85° longitude west , in the east-central United States. The WGS 84 datum surface is an oblate spheroid with equatorial radius a = 6 378 137 m at the equator and flattening f = 1/298.257 223 563 . The refined value of the WGS 84 gravitational constant (mass of Earth's atmosphere included) is GM = 3.986 004 418 × 10¹⁴ m³/s² . The angular velocity of the Earth is defined to be ω = 72.921 15 × 10⁻⁶ rad/s . [ 11 ] This leads to several computed parameters such as the polar semi-minor axis b , which equals a × (1 − f ) = 6 356 752.3142 m , and the first eccentricity squared, e² = 6.694 379 990 14 × 10⁻³ . [ 11 ] The original standardization document for WGS 84 was Technical Report 8350.2, published in September 1987 by the Defense Mapping Agency (which later became the National Imagery and Mapping Agency). New editions were published in September 1991 and July 1997; the latter edition was amended twice, in January 2000 and June 2004. [ 12 ] The standardization document was revised again and published in July 2014 by the National Geospatial-Intelligence Agency as NGA.STND.0036. [ 13 ] These updates provide refined descriptions of the Earth and realizations of the system for higher precision. The original WGS84 model had an absolute accuracy of 1–2 meters. WGS84 (G730) first incorporated GPS observations, taking the accuracy down to 10 cm/component rms. [ 14 ] All following revisions including WGS84 (G873) and WGS84 (G1150) also used GPS. [ 15 ] WGS 84 (G1762) is the sixth update to the WGS reference frame. [ 14 ] WGS 84 has most recently been updated to use the reference frame G2296 , which was released on 7 January 2024 as an update to G2139, now aligned to both the ITRF2020, the most recent ITRF realization, and the IGS20, the frame used by the International GNSS Service (IGS).
[ 16 ] G2139 was aligned with the IGb14 realization of the International Terrestrial Reference Frame (ITRF) 2014 and uses the new IGS ANTEX standard. [ 17 ] Updates to the original geoid for WGS 84 are now published as a separate Earth Gravitational Model (EGM), with improved resolution and accuracy. Likewise, the World Magnetic Model (WMM) is updated separately. The current version of WGS 84 uses EGM2008 and WMM2020. [ 18 ] [ 19 ] A solution for Earth orientation parameters consistent with ITRF2014 (IERS EOP 14C04) is also needed. [ 20 ] Components of WGS 84 are identified by codes in the EPSG Geodetic Parameter Dataset . [ 21 ]
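The derived ellipsoid quantities quoted earlier follow directly from the two defining parameters a and f. The short Python check below is illustrative arithmetic only, not any official implementation, and simply reproduces the numbers given in the text.

```python
# Derived WGS 84 ellipsoid quantities from the two defining parameters.
a = 6378137.0                    # semi-major (equatorial) axis [m]
f = 1 / 298.257223563            # flattening

b = a * (1 - f)                  # semi-minor (polar) axis
e2 = f * (2 - f)                 # first eccentricity squared, e^2 = 2f - f^2

print(f"b  = {b:.4f} m")         # 6356752.3142 m
print(f"e2 = {e2:.14e}")         # ~6.69437999014e-03
```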
https://en.wikipedia.org/wiki/World_Geodetic_System
The World Geographical Scheme for Recording Plant Distributions ( WGSRPD ) is a biogeographical system developed by the international Biodiversity Information Standards (TDWG) organization, formerly the International Working Group on Taxonomic Databases. [ 1 ] The WGSRPD standards, like other standards for data fields in botanical databases, were developed to promote "the wider and more effective dissemination of information about the world's heritage of biological organisms for the benefit of the world at large". The system provides clear definitions and codes for recording plant distributions at four scales or levels, from "botanical continents" down to parts of large countries. The codes may be referred to as TDWG geographical codes . Current users of the system include the International Union for Conservation of Nature (IUCN), the Germplasm Resources Information Network (GRIN), and Plants of the World Online (POWO). The scheme is one of a number developed by Biodiversity Information Standards particularly aimed at taxonomic databases . [ 2 ] The starting point was the "need for an agreed system of geographical units at approximately 'country' level and upwards for use in recording plant distributions". [ 1 ] The scheme represents a compromise between political and botanical divisions. [ 3 ] All boundaries either follow a political boundary (country boundary, province boundary, etc.) or coastlines. [ 1 ] The scheme also aims to follow botanical tradition, in terms of the distribution categories used in works like the Flora Europaea , Flora Malesiana , or Med-Checklist. [ 4 ] This approach occasionally leads to departures from political boundaries. Thus the scheme follows Flora Europaea [ 5 ] in placing the eastern Aegean islands (such as Lesbos , Samos and Rhodes ) in the West Asia region, [ 6 ] rather than in Europe where they belong politically as part of Greece. The scheme defines geographic places at four scales or levels: [ 7 ] Level 1 "botanical continents", Level 2 regions, Level 3 areas, and Level 4 "basic recording units", which may be parts of large countries. Standardized codes are used to represent the units at each level. Numerical codes are used for Levels 1 and 2, alphabetic codes for Levels 3 and 4. For more botanically oriented classifications using phytogeography, the scheme's documentation endorses the use of floristic kingdoms , floristic regions , and floristic provinces , as classified by Armen Takhtajan . [ 10 ] The WGSRPD defines nine botanical continents (Level 1), each assigned a single-digit code from 1 (Europe) to 9 (Antarctica). Although it is said that "popular concepts of the continents of the world have been maintained, but with one or two slight modifications", [ 3 ] some of the botanical continents are notably different from the traditional geographical continents . In particular, Asia is divided into two botanical continents; 5 Australasia consists only of Australia and New Zealand and small outlying islands; most of the islands in the Pacific Ocean are allocated to 6 Pacific; and the division of the Americas into 7 Northern America and 8 Southern America differs from the traditional North America and South America . [ 3 ] The botanical continent of Europe is defined broadly in line with Flora Europaea [ 5 ] and with the traditional geographical definition . To the north-west it includes Iceland and Svalbard (Spitsbergen). The southern boundary with Africa encloses most of the Mediterranean islands. The eastern boundary places Crimea and European Russia in Europe, with the border defined by the administrative units.
Novaya Zemlya is excluded from Europe. The south-eastern boundary excludes the Caucasus and Turkey east of the Bosporus , as well as the Eastern Aegean Islands and Cyprus , which although geopolitically part of Europe are considered floristically part of Western Asia. [ 11 ] The botanical continent of Africa corresponds closely to the usual geographical definition. It excludes the Sinai Peninsula , politically a part of Egypt , which is placed in region 34 Western Asia. To the west, it includes islands grouped as Macaronesia , comprising the Azores , Madeira , the Canary Islands , the Savage Islands and the Cape Verde islands. To the east, it includes Madagascar and other Indian Ocean islands out as far as the island of Rodrigues . [ 12 ] The geographical continent of Asia is divided into two botanical continents, 3 Asia-Temperate and 4 Asia-Tropical. The reason for the division was described as largely one of convenience. [ 3 ] Asia-Temperate borders Europe and Africa; the boundaries are described above. To the south-east, the Indian Subcontinent and the rest of Asia from region 41 Indo-China southwards are placed in Asia-Tropical. [ 13 ] Asia-Tropical forms the second part of the traditional geographical continent of Asia. Its western and northern boundaries are formed by the two regions 40 Indian Subcontinent and 41 Indo-China. The southern boundary separates Asia-Tropical from Australia . The south-eastern boundary was changed between the first edition of 1992 and the second edition of 2001. In the first edition, Asia-Tropical was divided into three regions: 40 Indian Subcontinent, 41 Indo-China and 42 Malesia. The eastern boundary of Malesia was placed between the Bismarck Archipelago and the Solomon Islands Archipelago , which were put into region 60 Southwest Pacific. It was subsequently argued that it made more "floristic sense" to link the Solomon Islands with the Bismarck Archipelago and the island of New Guinea . Accordingly, in the second edition, a new region 43 Papuasia was created within Asia-Tropical, comprising New Guinea and Near Oceania (the Bismarck Archipelago and the Solomon Islands Archipelago), so that Asia-Tropical consists of four regions. [ 14 ] The botanical continent of Australasia, as defined by the WGSRPD, consists only of Australia and New Zealand , plus outlying islands. The name was described as having been "controversial", since it has been used to describe larger areas. [ 15 ] Other definitions may include Indonesia , New Guinea and many Pacific islands, which the WGSRPD divides between 4 Asia-Tropical and 6 Pacific. The WGSRPD groups most islands with a nearby continental landmass, usually the closest, although floristic similarity may also influence the decision (hence the placement of the Azores with Africa and not Europe). The exception is the islands of the central part of the Pacific Ocean , which are placed in a separate botanical continent. The largest of these islands include New Caledonia , Fiji and Hawaii . [ 16 ] The WGSRPD divides the Americas into 7 Northern America and 8 Southern America rather than into the traditional continents of North America and South America . The boundary between Northern America and Southern America was changed from the first edition to the second edition. In the first edition, a south-eastern part of Mexico was included in Southern America, the rest of Mexico being placed in Northern America. This followed the boundary of Mesoamerica in Flora Mesoamericana .
However, it proved unpopular, especially with Mexican botanists, so in the second edition, all of Mexico is placed in Northern America, which thus consists of Mexico, the contiguous United States plus Alaska, Canada, and Greenland , together with associated offshore islands. [ 17 ] As noted above , the Americas are divided into 7 Northern America and 8 Southern America rather than into the traditional continents of North America and South America, with the precise boundary between the two having changed between the first and second editions of the WGSRPD. Southern America consists of the Caribbean , the WGSRPD definition of Central America (those countries south of Mexico and north of Colombia ), and the traditional geographical continent of South America, together with some offshore islands, such as the Galapagos . [ 18 ] The Antarctic botanical continent consists of continental Antarctica , plus a number of Subantarctic Islands, including the Falkland Islands , South Georgia and Tristan da Cunha . [ 19 ] The nine botanical continents (Level 1) are each divided into between two and ten Level 2 regions; see the table above. Each region is given a two-digit code, the first digit being that of the Level 1 continent to which it belongs. Altogether, there are 52 regions. [ 9 ] Many of the regions are geographical divisions of the continents, e.g. 12 Southwestern Europe, 34 Western Asia or 77 South-Central U.S.A. Others are whole countries within the continents, e.g. 36 China, 79 Mexico or 84 Brazil. [ 9 ] Levels 3 and 4 are identified by letter codes. Three-letter codes are used for Level 3; [ 4 ] e.g. "NWG" stands for New Guinea . [ 24 ] Where the Level 3 area is subdivided into Level 4 "basic recording units", a two-letter code is appended; [ 25 ] thus "NWG-IJ" represents Irian Jaya , [ 26 ] the Indonesian part of New Guinea. Where the Level 3 area is not subdivided, "OO" may be added to create a five-letter code to show that the Level 4 unit is identical to the Level 3 area. [ 25 ] Thus "BIS" represents the Bismarck Archipelago at Level 3; this area is not subdivided, so "BIS-OO" can be used to represent it at Level 4. [ 23 ] As an example, the Level 2 region 43 Papuasia is divided in this way into its constituent Level 3 areas, such as New Guinea and the Bismarck Archipelago, and their Level 4 units. Organizations and works using the scheme include the International Union for Conservation of Nature (IUCN), [ 27 ] the Germplasm Resources Information Network (GRIN), and the World Checklist of Vascular Plants, which supports Plants of the World Online , published by Kew . [ 28 ] Thus in the GRIN Taxonomy for Plants database, the distribution of Magnolia grandiflora is given in terms of WGSRPD botanical continents and regions, below which the Level 3 areas, in this case US states, are listed. [ 29 ]
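The code structure just described is regular enough to validate mechanically. The Python sketch below is an illustration only: the lookup tables are stocked solely with the handful of codes quoted in this article, and a real application would load the full WGSRPD tables instead.

```python
import re

# Sample entries only, taken from the codes mentioned in the text.
LEVEL3 = {"NWG": "New Guinea", "BIS": "Bismarck Archipelago"}
LEVEL4 = {"NWG-IJ": "Irian Jaya"}

def parse_code(code):
    """Identify the WGSRPD level of a code and resolve it if known.

    Level 1: single digit; Level 2: two digits; Level 3: three letters;
    Level 4: three letters, a hyphen, and two letters ("OO" meaning the
    Level 4 unit is identical to the Level 3 area).
    """
    if re.fullmatch(r"[1-9]", code):
        return ("Level 1 botanical continent", code)
    if re.fullmatch(r"[1-9][0-9]", code):
        return ("Level 2 region", code)
    if re.fullmatch(r"[A-Z]{3}", code):
        return ("Level 3 area", LEVEL3.get(code, "unknown"))
    m = re.fullmatch(r"([A-Z]{3})-([A-Z]{2})", code)
    if m:
        if m.group(2) == "OO":
            return ("Level 4 unit identical to Level 3 area",
                    LEVEL3.get(m.group(1), "unknown"))
        return ("Level 4 basic recording unit", LEVEL4.get(code, "unknown"))
    raise ValueError(f"not a valid WGSRPD code: {code}")

print(parse_code("43"))       # ('Level 2 region', '43')
print(parse_code("NWG-IJ"))   # ('Level 4 basic recording unit', 'Irian Jaya')
print(parse_code("BIS-OO"))   # Level 4 identical to 'Bismarck Archipelago'
```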
https://en.wikipedia.org/wiki/World_Geographical_Scheme_for_Recording_Plant_Distributions
World Immunization Week is a global public health campaign to raise awareness and increase rates of immunization against vaccine-preventable diseases around the world. It takes place each year during the last week of April (24–30 April). Immunization can protect against 25 different infectious agents or diseases, from infancy to old age, including diphtheria , measles , pertussis , polio , tetanus and COVID-19 . The World Health Organization (WHO) estimates active immunization currently averts 2 to 3 million deaths every year. However, 22.6 million infants worldwide are still missing out on basic vaccines, mostly in developing countries. [ 1 ] Inadequate immunization coverage rates often result from limited resources, competing health priorities, poor management of health systems and inadequate surveillance. The goal of World Immunization Week is to raise public awareness of how immunization saves lives, and to support people everywhere in getting the vaccinations needed against deadly diseases for themselves and their children. [ citation needed ] World Immunization Week sprang from earlier week-long immunization awareness efforts taking place across different countries and regions. World Immunization Week is one of eleven official campaigns marked by the WHO, along with World Health Day , World Blood Donor Day , World No Tobacco Day , World Tuberculosis Day , World Malaria Day , World Patient Safety Day , World Hepatitis Day , World Antimicrobial Awareness Week, World Chagas Disease Day and World AIDS Day . [ 2 ] The World Health Assembly endorsed World Immunization Week during its May 2012 meeting. [ 3 ] Previously, Immunization Week activities were observed on different dates in different regions of the world. Immunization Week was observed simultaneously for the first time in 2012, with the participation of more than 180 countries and territories worldwide. [ 4 ] [ 5 ] Each World Immunization Week focuses on an annual theme. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/World_Immunization_Week
The 2022 edition of World Immunization Week was observed from 24 to 30 April 2022. [ 1 ] World Immunization Week is a global public health campaign for raising awareness of immunization against vaccine-preventable diseases. The theme of the 2022 event was "Long Life for All", in pursuit of a long life well lived. [ 1 ] [ 2 ] The official hashtags of the event were #Vaccines4Life and #LongLifeForAll . [ 3 ] [ 4 ] Organizations such as UNICEF , GAVI and the Global Polio Eradication Initiative partnered with the World Health Organization for the 2022 edition of World Immunization Week. [ 5 ]
https://en.wikipedia.org/wiki/World_Immunization_Week_2022
The World Mill (also "heavenly mill", "cosmic mill" and variants) is a mytheme suggested as recurring in Indo-European and other mythologies. It involves the analogy of the cosmos or firmament and a rotating millstone . The mytheme was extensively explored in Viktor Rydberg 's 1886 Investigations into Germanic Mythology , which provides both ancient Scandinavian and Indian examples. [ citation needed ] Donald Mackenzie described the World Mill's relationship to the sacred spiral and the revolution of the starry heavens, providing analogues in Chinese, Egyptian, Babylonian, and AmerInd folklore, before concluding "that the idea of the World Mill originated as a result of the observation of the seasonal revolutions of the constellation of the 'Great Bear'." [ 1 ] Clive Tolley (1995) examined the significance of the mytheme in Indo-European and Finnish mythology . [ 2 ] Tolley found that "the image of a cosmic mill, ambivalently churning out well-being or disaster, may be recognized in certain fragmentary myths", adding further Indo-European and Finnish analogues of the mill to the material previously considered by Rydberg and others. Tolley comes to the conclusion that the cosmic mill was not, in extant Norse sources, a widely developed mythologem. Nonetheless, the myth of Mundilfæri connects the turning of the cosmos via a 'mill-handle' with the regulation of seasons, and the myth of Bergelmir suggests the concept of a creative milling of a giant's body, associated in some way with the sea. Richard M. Dorson surveyed the views of 19th-century writers on the World Mill in his 1968 historical review, Peasant Customs and Savage Myths: Selections from the British Folklorists , [ 3 ] and the mytheme is discussed in the Kommentar zu den Liedern der Edda , [ 4 ] in regard to the Eddic poem, Grottasöngr . A similar conception underlies the Eddaic Mundilföri, the giant who makes the heavens turn round in its daily and yearly revolutions by moving (færa) the handle (mundil, möndull) of the great world-mill — that being the Teutonic idea of the revolving vault of heaven. [Rydberg, Teutonic Mythology, 396-7; M. Müller, Contributions to the Science of Mythology, 40, 651] Mundilföri, the axis-mover and heaven-turner, is a solar being who has his children Máni and Sól (i.e. Sun and Moon). As fire-producer by turning, he was identified with Lodhurr, the fire-kindler. [Rydberg, 412; Du Chaillu, Viking Age, i. 38; C.F. Keary, The Vikings, 65. In the Finnish Kalevala the sun is called 'God's spindle' (Grimm, T.M., 1500)]
https://en.wikipedia.org/wiki/World_Mill
The UNESCO World Network of Biosphere Reserves ( WNBR ) covers internationally designated protected areas , known as biosphere or nature reserves , which are meant to demonstrate a balanced relationship between people and nature (e.g. encourage sustainable development ). [ 1 ] They are created under the Man and the Biosphere Programme (MAB). The World Network of Biosphere Reserves (WNBR) of the MAB Programme consists of a dynamic and interactive network of sites. It works to foster the harmonious integration of people and nature for sustainable development through participatory dialogue, knowledge sharing, poverty reduction, human well-being improvements, respect for cultural values and by improving society's ability to cope with climate change . It promotes north–south and south–south collaboration and represents a unique tool for international cooperation through the exchange of experiences and know-how, capacity-building and the promotion of best practices. [ 1 ] As of 2022, total membership had reached 738 biosphere reserves in 134 countries (including 22 transboundary sites) occurring in all regions of the world. [ 1 ] This already takes into account some biosphere reserves that have been withdrawn or revised through the years, as the program's focus has shifted from simple protection of nature to areas displaying close interaction between man and environment. [ citation needed ] In 2023, ten more biospheres were announced. [ 2 ] In 2024, eleven more biospheres were announced, [ 3 ] bringing the total to 759 sites across 136 countries at the end of 2024. Article 4 of the UNESCO Statutory Framework of the World Network of Biosphere Reserves defines the general criteria for an area to qualify for designation as a biosphere reserve. Article 9 of the Statutory Framework states that "the status of each biosphere reserve should be subject to a periodic review every ten years, based on a report prepared by the concerned authority, on the basis of the criteria of Article 4". [ 6 ] If a biosphere reserve no longer satisfies the criteria contained in Article 4, it may be recommended that the state concerned take measures to ensure conformity. Should a biosphere reserve still fail to satisfy the criteria contained in Article 4 within a reasonable period, the area will no longer be referred to as a biosphere reserve that is part of the network. [ 6 ] Article 9 of the Statutory Framework gives a state the right to remove a biosphere reserve under its jurisdiction from the network. As of July 2018, a total of 45 sites had been withdrawn from the World Network of Biosphere Reserves by 9 countries. [ 7 ] Some reserves have been withdrawn after they no longer met newer, stricter criteria for reserves, for example on zonation or area size. [ 8 ] In June 2017, during the International Coordinating Council of the Man and the Biosphere Programme (MAB ICC) meeting in Paris, the United States withdrew 17 sites (out of the country's previous total of 47 sites) from the program. [ 9 ]
https://en.wikipedia.org/wiki/World_Network_of_Biosphere_Reserves
The World Nuclear Transport Institute (WNTI) is an international organisation that represents the collective interests of the nuclear power and packaging industries , and those who rely on it for the safe, efficient, and reliable transport of radioactive materials. [ 1 ] [ 2 ] Through the WNTI, companies work together to promote a sound international framework for the future by helping to build international consensus through co-operation and understanding. [ 3 ] The Institute was founded in 1998. [ 4 ] [ 5 ] The WNTI is a private , non-profit organisation funded by membership subscriptions. [ 6 ] [ 7 ] Member companies are drawn from a wide range of industry sectors including major utilities , fuel producers and fabricators , transport companies, package designers, package producers and mines . [ 8 ] Headquartered in London , the WNTI Secretariat has a small staff of qualified professionals working closely with members and other international bodies involved in the transport of radioactive materials . [ 9 ] The Board of Directors currently comprises seven directors and meets biannually. The Institute is managed by the Secretary General , who chairs an Advisory Committee which reports to the Board of Directors . The WNTI operates as a network organisation, with regional offices in Tokyo and Washington, D.C. [ 10 ] Intergovernmental organisations such as the International Atomic Energy Agency (IAEA) [ 21 ] and the International Maritime Organization (IMO) [ 22 ] play a pivotal role in establishing standards and regulations that apply to radioactive materials transport, and it is important that industry views are represented. [ 23 ] Through its non-governmental status, the WNTI supports the work of the key intergovernmental organisations in promoting an efficient, harmonised international transport safety regime. [ 24 ] Exchanges within intergovernmental organisations and with competent authorities , and collaboration with related industry organisations such as FORATOM , [ 25 ] the Nuclear Energy Institute (NEI), [ 26 ] the World Nuclear Association (WNA), [ 27 ] the World Institute for Nuclear Security (WINS) [ 28 ] and the International Organization for Standardization (ISO), [ 29 ] are essential and remain a priority for the WNTI. [ 30 ] [ 31 ] The WNTI produces technical and factual information to support a background for balanced policies and regulations. [ 28 ] [ 32 ] Scientific and other academic papers are published regularly and presented to key officials including regulators . [ 33 ] The WNTI public website provides information on nuclear transport including the nuclear fuel cycle , non-fuel cycle transport, regulations and packages, and also includes an image library. [ 33 ]
https://en.wikipedia.org/wiki/World_Nuclear_Transport_Institute
The World Ocean Atlas ( WOA ) is a data product of the Ocean Climate Laboratory of the National Centers for Environmental Information ( U.S. ). [ 1 ] The WOA consists of a climatology of fields of in situ ocean properties for the World Ocean . It was first produced in 1994 [ 2 ] (based on the earlier Climatological Atlas of the World Ocean , 1982 [ 3 ] ), with later editions at roughly four-year intervals in 1998, 2001, 2005, 2009, 2013, 2018, and 2023. [ 4 ] The World Ocean Atlas (WOA) is based on profile data from the World Ocean Database (WOD) Project. [ 1 ] The fields that make up the WOA dataset consist of objectively analysed global grids at 1° spatial resolution . The fields are three-dimensional , and data are typically interpolated onto 33 standardised vertical intervals [ 5 ] from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, averaged fields are produced for annual, seasonal and monthly time-scales. The WOA fields include ocean temperature , salinity , dissolved oxygen , apparent oxygen utilisation (AOU), percent oxygen saturation , phosphate , silicic acid , and nitrate . Early editions of the WOA additionally included fields such as mixed layer depth and sea surface height . In addition to the averaged fields of ocean properties, the WOA also contains fields of statistical information concerning the constituent data from which the averages were produced. These include fields such as the number of data points the average is derived from, their standard deviation and standard error . A lower horizontal resolution (5°) version of the WOA is also available. The WOA dataset is primarily available as compressed ASCII , but since WOA 2005 a netCDF version has also been produced.
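Since WOA grids are distributed as netCDF, they can be read with standard tools. The following Python sketch is illustrative only: the file name is a placeholder, and the variable name "t_an" (commonly used in WOA files for the objectively analysed temperature field) should be verified against the variable list of whichever edition and file you actually download.

```python
import netCDF4  # pip install netCDF4

# Placeholder file name; substitute the WOA file you downloaded.
ds = netCDF4.Dataset("woa_temperature_annual.nc")

lat = ds.variables["lat"][:]      # 1-degree grid latitudes
lon = ds.variables["lon"][:]      # 1-degree grid longitudes
depth = ds.variables["depth"][:]  # standardised vertical levels

# Assumed variable name for the analysed temperature climatology,
# typically dimensioned (time, depth, lat, lon).
t_an = ds.variables["t_an"]

# Surface (0 m) temperature at the grid cell nearest 30N, 30W.
i = abs(lat - 30.0).argmin()
j = abs(lon - (-30.0)).argmin()
print(float(t_an[0, 0, i, j]))
```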
https://en.wikipedia.org/wiki/World_Ocean_Atlas
The World Register of Marine Species ( WoRMS ) is a taxonomic database that aims to provide an authoritative and comprehensive catalogue and list of names of marine organisms . [ 1 ] The content of the registry is edited and maintained by scientific specialists on each group of organism. These taxonomists control the quality of the information, which is gathered from the primary scientific literature as well as from some external regional and taxon-specific databases. WoRMS maintains valid names of all marine organisms, but also provides information on synonyms and invalid names. Maintaining the registry is an ongoing task, since new species are constantly being discovered and described by scientists; in addition, the nomenclature and taxonomy of existing species are often corrected or changed as new research is published. [ citation needed ] Subsets of WoRMS content are made available, and can have separate badging and their own home/launch pages, as "subregisters", such as the World List of Marine Acanthocephala , World List of Actiniaria , World Amphipoda Database , World Porifera Database , and so on. As of December 2018 there were 60 such taxonomic subregisters, including a number presently under construction. [ 2 ] A second category of subregisters comprises regional species databases such as the African Register of Marine Species , Belgian Register of Marine Species , etc., while a third comprises thematic subsets such as the World Register of Deep-Sea species (WoRDSS) and the World Register of Introduced Marine Species (WRiMS) . In all of these cases, the base data are entered and held once only as part of the WoRMS data system for ease of maintenance and data consistency, and are redisplayed as needed in the context of the relevant subregister or subregisters to which they may also belong. [ citation needed ] Certain subregisters expand content beyond the original "marine" concept of WoRMS by including freshwater or terrestrial taxa for completeness in their designated area of interest; such records can be excluded from a standard search of WoRMS by selecting appropriate options in the online search interface. [ citation needed ] WoRMS was founded in 2007 and grew out of the European Register of Marine Species and the UNESCO-IOC Register of Marine Organisms (URMO), which was compiled by Jacob van der Land (and several colleagues) at the National Museum of Natural History, Leiden. [ 3 ] It is primarily funded by the European Union and hosted by the Flanders Marine Institute (VLIZ) in Ostend , Belgium. WoRMS has established formal agreements with several other biodiversity projects, including the Global Biodiversity Information Facility and the Encyclopedia of Life . In 2008, WoRMS stated that it hoped to have an up-to-date record of all marine species completed by 2010, the year in which the Census of Marine Life was completed. [ 4 ] As of March 2025, WoRMS contained listings for 514,088 marine species names (including synonyms), of which 247,418 are valid marine species (98% checked). Its goal is to have a listing for each of the more than 240,000 known marine species. [ 5 ] [ 6 ] VLIZ also hosts the Interim Register of Marine and Nonmarine Genera (IRMNG), using a common infrastructure. [ 7 ] [ 8 ] In 2021, a genus of extinct sea snails was named after the WoRMS database: † Wormsina Harzhauser & Landau, 2021 . [ 9 ]
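WoRMS content can also be queried programmatically through its public REST web service. The Python sketch below is a hedged illustration: the endpoint names (AphiaIDByName, AphiaRecordByAphiaID) and the marine_only parameter follow the WoRMS REST documentation as we understand it, and should be verified against the current documentation at https://www.marinespecies.org/rest/ before relying on them.

```python
import requests

BASE = "https://www.marinespecies.org/rest"

def worms_record(name):
    """Look up a taxon name in WoRMS and return its Aphia record.

    Assumed endpoints: AphiaIDByName returns a bare AphiaID integer
    (HTTP 204 if the name is unknown), and AphiaRecordByAphiaID
    returns the full JSON record for that ID.
    """
    r = requests.get(f"{BASE}/AphiaIDByName/{name}",
                     params={"marine_only": "true"})
    r.raise_for_status()
    aphia_id = r.json()
    rec = requests.get(f"{BASE}/AphiaRecordByAphiaID/{aphia_id}")
    rec.raise_for_status()
    return rec.json()

rec = worms_record("Wormsina")
print(rec["scientificname"], rec["status"])  # name and its status
```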
https://en.wikipedia.org/wiki/World_Register_of_Marine_Species