In chemistry and fluid mechanics, the volume fraction φ_i is defined as the volume of a constituent V_i divided by the volume V of all constituents of the mixture prior to mixing: [1]

φ_i = V_i / Σ_j V_j

Being dimensionless, its unit is 1; it is expressed as a number, e.g., 0.18. It is the same concept as volume percent (vol%) except that the latter is expressed with a denominator of 100, e.g., 18%. The volume fraction coincides with the volume concentration in ideal solutions, where the volumes of the constituents are additive (the volume of the solution is equal to the sum of the volumes of its ingredients). The sum of all volume fractions of a mixture is equal to 1:

Σ_i φ_i = 1

The volume fraction (percentage by volume, vol%, % v/v) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others. Volume percent is the concentration of a certain solute, measured by volume, in a solution. It has as its denominator the volume of the mixture itself, as is usual for expressions of concentration, [2] rather than the total of all the individual components' volumes prior to mixing:

volume percent = (volume of solute / volume of solution) × 100

Volume percent is usually used when the solution is made by mixing two fluids, such as liquids or gases. However, percentages are only additive for ideal gases. [3] In the case of a mixture of ethanol and water, which are miscible in all proportions, the designation of solvent and solute is arbitrary. The volume of such a mixture is slightly less than the sum of the volumes of the components.
Thus, by the above definition, the term "40% alcohol by volume" refers to a mixture of 40 volume units of ethanol with enough water to make a final volume of 100 units, rather than a mixture of 40 units of ethanol with 60 units of water. The "enough water" is actually slightly more than 60 volume units, since the water–ethanol mixture loses volume due to intermolecular attraction. [citation needed] Volume fraction is related to mass fraction w_i by

φ_i = (ρ_m / ρ_i) w_i

where ρ_i is the constituent density and ρ_m is the mixture density.
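The definitions above can be sketched in code. This is a minimal illustration, assuming ideal (additive) volumes; the function names and the densities used are illustrative, not authoritative values.

```python
# Sketch: volume fraction and its relation to mass fraction, assuming
# ideal (additive) volumes. Densities are approximate values in g/mL.

def volume_fraction(volumes):
    """phi_i = V_i / sum_j V_j, for volumes measured prior to mixing."""
    total = sum(volumes)
    return [v / total for v in volumes]

def volume_from_mass_fraction(w_i, rho_i, rho_mixture):
    """phi_i = (rho_m / rho_i) * w_i."""
    return (rho_mixture / rho_i) * w_i

# 40 mL ethanol + 60 mL water, measured before mixing:
phi = volume_fraction([40.0, 60.0])
print(phi)  # [0.4, 0.6]

# Cross-check via mass fraction (additive-volume assumption):
rho_ethanol, rho_water = 0.789, 0.997            # g/mL, approximate
m_eth, m_wat = 40.0 * rho_ethanol, 60.0 * rho_water
w_eth = m_eth / (m_eth + m_wat)                  # mass fraction of ethanol
rho_mix = (m_eth + m_wat) / 100.0                # mixture density if volumes add
print(round(volume_from_mass_fraction(w_eth, rho_ethanol, rho_mix), 6))  # 0.4
```

For a real 40% ABV mixture the additive-volume assumption fails slightly, as the text notes: the actual water required is a bit more than 60 volume units.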
https://en.wikipedia.org/wiki/Volume_fraction
Volume solid is the volume of paint remaining after it has dried; this is distinct from the weight solid. Paint may contain solvent, resin, pigments, and additives, and some paints contain no solvent at all. After the paint is applied, the solid portion is left on the substrate. Volume solid is the term that indicates the solid proportion of the paint on a volume basis. For example, if the paint is applied as a wet film at a 100 μm thickness and the volume solid of the paint is 50%, then the dry film thickness (DFT) will be 50 μm, as 50% of the wet paint has evaporated. Suppose instead the volume solid is 100% and the wet film thickness is again 100 μm; then after complete drying of the paint the DFT will be 100 μm, because no solvent will evaporate. [1] This is an important concept when using paint industrially, to calculate the cost of painting. [1] It can be said that it is the real volume of paint. The volume solid can be calculated as the ratio of dry film thickness to wet film thickness, expressed as a percentage. A simple empirical method is to apply paint to a steel surface with an application knife and measure the wet film thickness, then cure the paint and measure the dry film thickness; the percentage of dry to wet represents the percentage of volume solids. In earlier days the volume solid was measured by a disc method, but now a sophisticated instrument is available which needs only a drop of paint to check the volume solid. [citation needed] Understanding volume solids allows knowing the true cost of different coatings and how much paint is needed to perform its function. Generally, more expensive paints have a higher volume of solids and provide better coverage. [2]
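The wet-to-dry relationship described above can be sketched as a pair of small helper functions; the names are illustrative.

```python
# Sketch: dry film thickness (DFT) from wet film thickness (WFT) and
# volume solids, and the empirical volume-solids calculation from
# measured thicknesses, as described in the text.

def dry_film_thickness(wet_film_um, volume_solids_pct):
    """DFT = WFT x volume solids; e.g. 100 um wet at 50% solids -> 50 um dry."""
    return wet_film_um * volume_solids_pct / 100.0

def volume_solids_pct(wet_film_um, dry_film_um):
    """Empirical volume solids: ratio of measured dry to wet film thickness."""
    return 100.0 * dry_film_um / wet_film_um

print(dry_film_thickness(100.0, 50.0))   # 50.0 (the example from the text)
print(volume_solids_pct(100.0, 100.0))   # 100.0 (solvent-free paint)
```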
https://en.wikipedia.org/wiki/Volume_solid
Several units of volume are used in petroleum engineering . Due to the risk of confusion with the SI convention where the "M" prefix stands for "mega" representing million, the Society of Petroleum Engineers recommends in their style guide that abbreviations or prefixes M or MM are not used for barrels of oil or barrel of oil equivalent, but rather that thousands, millions or billions are spelled out. [ 1 ] Note that the m³ gas conversion factor takes into account a difference in the standard temperature base for measurement of gas volumes in metric and imperial units. The standard temperature for metric measurement is 15 degrees Celsius (i.e. 59 degrees Fahrenheit) while for English measurement the standard temperature is 60 °F. Gas undergoes a slight expansion when the temperature is raised from 15 °C (59 °F) to 60 °F and this expansion is built into the above factor for gas. The standard temperature and pressure (STP) for gas varies depending on the particular code being used. [ 2 ] It is just as important to know the standard pressure as the temperature. Formerly, OPEC used 101.325 kPa (14.696 psia) but now the standard is 101.560 kPa (14.73 psia).
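The barrel conversion and the temperature-base adjustment for gas volumes described above can be illustrated numerically. The barrel factor is exact by definition (42 US gallons); the gas expansion factor below applies only the ideal-gas temperature scaling from 15 °C to 60 °F at constant pressure, a simplified sketch that ignores the difference in pressure bases also mentioned in the text.

```python
# Sketch: illustrative petroleum volume conversions. Barrel factor is
# exact by definition; gas factor is the ideal-gas temperature scaling
# from 15 C to 60 F at constant pressure (pressure-base differences
# between codes are ignored here).

M3_PER_BARREL = 0.158987294928  # 42 US gallons, exact

def barrels_to_m3(bbl):
    return bbl * M3_PER_BARREL

def gas_expansion_15c_to_60f():
    """Ratio of a gas volume at 60 F to the same gas at 15 C, same pressure."""
    t_15c_k = 288.15
    t_60f_k = (60.0 - 32.0) * 5.0 / 9.0 + 273.15  # 60 F = 15.556 C
    return t_60f_k / t_15c_k

print(round(barrels_to_m3(1_000_000), 1))    # one million barrels, in m3
print(round(gas_expansion_15c_to_60f(), 6))  # ~1.001928, a slight expansion
```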
https://en.wikipedia.org/wiki/Volume_units_used_in_petroleum_engineering
Volume viscosity (also called bulk viscosity, second viscosity, or dilatational viscosity) is a material property relevant for characterizing fluid flow. Common symbols are ζ, μ′, μ_b, κ, or ξ. It has dimensions of mass/(length × time), and the corresponding SI unit is the pascal-second (Pa·s). Like other material properties (e.g. density, shear viscosity, and thermal conductivity), the value of volume viscosity is specific to each fluid and depends additionally on the fluid state, particularly its temperature and pressure. Physically, volume viscosity represents the irreversible resistance, over and above the reversible resistance caused by the isentropic bulk modulus, to a compression or expansion of a fluid. [1] At the molecular level, it stems from the finite time required for energy injected into the system to be distributed among the rotational and vibrational degrees of freedom of molecular motion. [2] Knowledge of the volume viscosity is important for understanding a variety of fluid phenomena, including sound attenuation in polyatomic gases (e.g. Stokes's law), propagation of shock waves, and the dynamics of liquids containing gas bubbles. In many fluid dynamics problems, however, its effect can be neglected. For instance, it is 0 in a monatomic gas at low density (unless the gas is moderately relativistic [3]), whereas in an incompressible flow the volume viscosity is superfluous, since it does not appear in the equation of motion. [4] Volume viscosity was introduced in 1879 by Sir Horace Lamb in his famous work Hydrodynamics. [5] Although relatively obscure in the scientific literature at large, volume viscosity is discussed in depth in many important works on fluid mechanics, [1][6][7] fluid acoustics, [8][9][10][2] theory of liquids, [11][12] rheology, [13] and relativistic hydrodynamics.
[3] At thermodynamic equilibrium, minus one-third of the trace of the Cauchy stress tensor is often identified with the thermodynamic pressure, which depends only on equilibrium state variables such as temperature and density (the equation of state). In general, the trace of the stress tensor is the sum of the thermodynamic pressure contribution and another contribution which is proportional to the divergence of the velocity field. This coefficient of proportionality is called the volume viscosity. Common symbols for volume viscosity are ζ and μ_v. Volume viscosity appears in the classic Navier–Stokes equation when it is written for a compressible fluid, as described in most books on general hydrodynamics [6][1] and acoustics: [9][10]

ρ Dv/Dt = −∇p + μ ∇²v + (ζ + μ/3) ∇(∇·v)

where μ is the shear viscosity coefficient and ζ is the volume viscosity coefficient. The parameters μ and ζ were originally called the first and bulk viscosity coefficients, respectively. The operator Dv/Dt is the material derivative. By introducing the tensors (matrices) ε, γ, and eI (where e is a scalar called dilation and I is the identity tensor), which describe crude shear flow (i.e. the strain rate tensor), pure shear flow (i.e. the deviatoric part of the strain rate tensor, i.e. the shear rate tensor [14]), and compression flow (i.e. the isotropic dilation tensor), respectively, the classic Navier–Stokes equation takes a lucid form.
Note that the term in the momentum equation that contains the volume viscosity disappears for an incompressible flow: there is no divergence of the flow, and so no flow dilation e, to which that term is proportional. The incompressible Navier–Stokes equation can therefore be written simply as

ρ Dv/Dt = −∇p + μ ∇²v

In fact, note that for incompressible flow the strain rate is purely deviatoric, since there is no dilation (e = 0). In other words, for an incompressible flow the isotropic stress component is simply the pressure,

σ_iso = −p I

and the deviatoric (shear) stress is simply twice the product of the shear viscosity and the strain rate (Newton's constitutive law):

τ = 2μγ

Therefore, in incompressible flow the volume viscosity plays no role in the fluid dynamics. However, in a compressible flow there are cases where ζ ≫ μ, which are explained below. In general, moreover, ζ is not just a property of the fluid in the classic thermodynamic sense, but also depends on the process, for example the compression/expansion rate. The same goes for shear viscosity: for a Newtonian fluid the shear viscosity is a pure fluid property, but for a non-Newtonian fluid it is not, due to its dependence on the velocity gradient. Neither shear nor volume viscosity is an equilibrium parameter or property; both are transport properties. The velocity gradient and/or compression rate are therefore independent variables together with pressure, temperature, and other state variables. According to Landau, [1] In compression or expansion, as in any rapid change of state, the fluid ceases to be in thermodynamic equilibrium, and internal processes are set up in it which tend to restore this equilibrium. These processes are usually so rapid (i.e. their relaxation time is so short) that the restoration of equilibrium follows the change in volume almost immediately unless, of course, the rate of change of volume is very large.
He later adds: It may happen, nevertheless, that the relaxation times of the processes of restoration of equilibrium are long, i.e. they take place comparatively slowly. After an example, he concludes (with ζ used to represent volume viscosity): Hence, if the relaxation time of these processes is long, a considerable dissipation of energy occurs when the fluid is compressed or expanded, and, since this dissipation must be determined by the second viscosity, we reach the conclusion that ζ is large. A brief review of the techniques available for measuring the volume viscosity of liquids can be found in Dukhin & Goetz [10] and Sharma (2019). [15] One such method uses an acoustic rheometer. Below are values of the volume viscosity for several Newtonian liquids at 25 °C (reported in cP): [16] Recent studies have determined the volume viscosity for a variety of gases, including carbon dioxide, methane, and nitrous oxide; these were found to have volume viscosities hundreds to thousands of times larger than their shear viscosities. [15] Fluids having large volume viscosities include those used as working fluids in power systems having non-fossil-fuel heat sources, in wind tunnel testing, and in pharmaceutical processing. There are many publications dedicated to numerical modeling of volume viscosity. A detailed review of these studies can be found in Sharma (2019) [15] and Cramer. [17] In the latter study, a number of common fluids were found to have bulk viscosities hundreds to thousands of times larger than their shear viscosities. For relativistic liquids and gases, bulk viscosity is conveniently modeled in terms of a mathematical duality with chemically reacting relativistic fluids. [3]
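The contribution of ζ to sound attenuation mentioned above can be illustrated with the classical (viscosity-only) absorption coefficient, α = ω²(4μ/3 + ζ)/(2ρc³). This is a sketch under that classical formula; the water-like values for ρ, c, μ, and ζ are approximate, illustrative numbers.

```python
# Sketch: classical viscous sound attenuation,
#   alpha = omega^2 * (4*mu/3 + zeta) / (2 * rho * c^3),
# showing how the volume viscosity zeta adds to the shear contribution.
import math

def attenuation(freq_hz, rho, c, mu, zeta):
    """Attenuation coefficient (1/m) from shear and volume viscosity."""
    omega = 2.0 * math.pi * freq_hz
    return omega**2 * (4.0 * mu / 3.0 + zeta) / (2.0 * rho * c**3)

# Water-like values (approximate): rho = 1000 kg/m3, c = 1500 m/s,
# mu ~ 1.0e-3 Pa s, zeta ~ 2.4e-3 Pa s (a few times mu).
a_with = attenuation(1e6, 1000.0, 1500.0, 1.0e-3, 2.4e-3)
a_without = attenuation(1e6, 1000.0, 1500.0, 1.0e-3, 0.0)
print(round(a_with / a_without, 3))  # 2.8: zeta nearly triples the attenuation
```

The ratio shows why neglecting ζ can badly underestimate absorption in liquids and polyatomic gases, even though ζ drops out of incompressible flow entirely.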
https://en.wikipedia.org/wiki/Volume_viscosity
A volumetric concrete mixer (also known as a volumetric mobile mixer) is a concrete mixer mounted on a truck or trailer that contains separate compartments for sand, stone, cement, and water. On arrival at the job site, the machine mixes the materials to produce the exact amount of concrete needed. Volumetric mixers batch, measure, mix, and dispense, all from one unit, and can produce exactly the amount of concrete needed whenever it is needed. [1] Some concrete suppliers offer general-purpose concrete batched in a volumetric mixer as a practical alternative to ready-mix if quantities and schedules are not fully known, to eliminate waste and prevent premature stiffening of the mix. Volumetric mixers vary in capacity up to 12 m³ and have a production rate of around 60 m³ an hour, depending on the mix design. Many volumetric concrete mixer manufacturers have innovated the mixer in capacity and design, as well as added features including color, multiple admixes, fiber systems, and the ability to do gunite or shotcrete. In the mid-1960s, companies such as Cemen Tech, Reimer Mixers (manufactured under the name ProAll circa 2016), and Zimmerman began building their own versions of volumetric concrete mixers. In 1999, equipment manufacturers created a trade association, the Volumetric Mixer Manufacturers Bureau (VMMB). It had six charter members: Cemen Tech, Inc., Zimmerman Ind, Inc., ProAll Reimer, Bay-Lynx, Custom-Crete, and Elkin. Currently its members include (in alphabetical order): Bay-Lynx, Cemen Tech, Holcombe Mixers, ProAll Reimer Mixers, and Zimmerman Ind, Inc. [8]
https://en.wikipedia.org/wiki/Volumetric_concrete_mixer
Volumetric efficiency (VE) in internal combustion engine engineering is defined as the ratio of the equivalent volume of fresh air drawn into the cylinder during the intake stroke (if the gases were at the reference condition for density) to the volume of the cylinder itself. The term is also used in other engineering contexts, such as hydraulic pumps and electronic components. [1][2][3][4] Volumetric efficiency in internal combustion engine design refers to the efficiency with which the engine can move the charge of fresh air into and out of the cylinders. It also denotes the ratio of the equivalent air volume drawn into the cylinder to the cylinder's swept volume. [5] This equivalent volume is commonly inserted into a mass estimation equation based upon the ideal gas law. When VE is multiplied by the cylinder volume, an accurate estimate of cylinder air mass (charge) can be made for use in determining the required fuel delivery and spark timing for the engine. Flow restrictions in the intake and exhaust systems reduce the inlet flow, which reduces the total mass delivered to the cylinder. Under some conditions, ram tuning may either increase or decrease the pumping efficiency of the engine; this happens when a favorable alignment of the pressure wave in the inlet (or exhaust) plumbing improves the flow through the valve. Increasing the pressure differential across the inlet valve typically increases VE throughout the naturally aspirated range. Adding intake manifold boost pressure from a supercharger or turbocharger can increase the VE, but the final calculation for cylinder air mass takes most of this benefit into account through the pressure term in n = PV/RT, which is taken from the intake manifold pressure. Many high-performance cars use carefully arranged air intakes and tuned exhaust systems that use pressure waves to push air into and out of the cylinders, making use of the resonance of the system.
Two-stroke engines are very sensitive to this concept and can use expansion chambers that return the escaping air–fuel mixture to the cylinder. A more modern technique for four-stroke engines, variable valve timing, attempts to address changes in volumetric efficiency with changes in engine speed: at higher speeds the engine needs the valves open for a greater percentage of the cycle time to move the charge in and out of the engine. Volumetric efficiencies above 100% can be reached by using forced induction such as supercharging or turbocharging. With proper tuning, volumetric efficiencies above 100% can also be reached by naturally aspirated engines. The limit for naturally aspirated engines is about 130%; [6] these engines are typically of a DOHC layout with four valves per cylinder. This process is called inertial supercharging and uses the resonance of the intake manifold and the mass of the air to achieve pressures greater than atmospheric at the intake valve. With proper tuning (and depending on the need for sound level control), VEs of up to 130% have been reported in various experimental studies. [7] More "radical" solutions include the sleeve valve design, in which the valves are replaced outright with a rotating sleeve around the piston, or alternatively a rotating sleeve under the cylinder head. In this system the ports can be as large as necessary, up to that of the entire cylinder wall. However, there is a practical upper limit due to the strength of the sleeve: at larger sizes, the pressure inside the cylinder can "pop" the sleeve if the port is too large. Volumetric efficiency in a hydraulic pump refers to the percentage of actual fluid flow out of the pump compared to the flow out of the pump without leakage. In other words, if the flow out of a 100 cc pump is 92 cc per revolution, then the volumetric efficiency is 92%.
The volumetric efficiency of a pump changes with the pressure and speed at which it is operated; therefore, when comparing volumetric efficiencies, the pressure and speed information must be available. When a single number is given for volumetric efficiency, it will typically be at the rated pressure and speed. In electronics, volumetric efficiency quantifies the performance of some electronic function per unit volume, the aim usually being to deliver that function in as small a space as possible. This is desirable since advanced designs need to cram increasing functionality into smaller packages, for example, maximizing the energy stored in a battery powering a cellphone. Besides energy storage in batteries, the concept of volumetric efficiency appears in the design and application of capacitors, where the "CV product" is a figure of merit calculated by multiplying the capacitance (C) by the maximum voltage rating (V) and dividing by the volume. The concept of volumetric efficiency can be applied to any measurable electronic characteristic, including resistance, capacitance, inductance, voltage, current, energy storage, etc.
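The two engineering uses of VE described above, the cylinder air-mass estimate via n = PV/RT and the hydraulic-pump flow ratio, can be sketched as follows. The operating values are illustrative assumptions, not data from the text.

```python
# Sketch: applying volumetric efficiency. The air-mass estimate uses the
# ideal gas law with the specific gas constant for air; the pump function
# is the leakage ratio described in the text. Operating values are
# illustrative.

R_SPECIFIC_AIR = 287.05  # J/(kg K), specific gas constant of dry air

def cylinder_air_mass(ve, swept_volume_m3, manifold_pressure_pa, intake_temp_k):
    """Trapped air mass per intake stroke: m = VE * P * V / (R_air * T)."""
    return ve * manifold_pressure_pa * swept_volume_m3 / (R_SPECIFIC_AIR * intake_temp_k)

def pump_volumetric_efficiency(actual_cc_per_rev, displacement_cc):
    """Percentage of actual flow vs. leak-free (geometric) flow."""
    return 100.0 * actual_cc_per_rev / displacement_cc

# 0.5 L cylinder, 85% VE, ~1 atm manifold pressure, 300 K intake air:
m = cylinder_air_mass(0.85, 0.0005, 101325.0, 300.0)
print(round(m * 1000, 3), "g")                    # ~0.5 g of air per stroke
print(pump_volumetric_efficiency(92.0, 100.0))    # 92.0 (the text's example)
```

The engine-management use case follows directly: the estimated charge mass feeds the fuel-delivery calculation for a target air–fuel ratio.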
https://en.wikipedia.org/wiki/Volumetric_efficiency
In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate or volume velocity) is the volume of fluid which passes per unit time; it is usually represented by the symbol Q (sometimes V̇). It contrasts with mass flow rate, the other main type of fluid flow rate. In most contexts, a mention of the rate of fluid flow is likely to refer to the volumetric rate. In hydrometry, the volumetric flow rate is known as discharge. Volumetric flow rate should not be confused with volumetric flux, as defined by Darcy's law and represented by the symbol q, with units of m³/(m²·s), that is, m·s⁻¹. The integration of a flux over an area gives the volumetric flow rate. The SI unit is cubic metres per second (m³/s). Another unit used is standard cubic centimetres per minute (SCCM). In US customary units and imperial units, volumetric flow rate is often expressed as cubic feet per second (ft³/s) or gallons per minute (either US or imperial definitions). In oceanography, the sverdrup (symbol: Sv, not to be confused with the sievert) is a non-SI metric unit of flow, with 1 Sv equal to 1 million cubic metres per second (35,000,000 cu ft/s); [1][2] it is equivalent to the SI derived unit cubic hectometre per second (symbol: hm³/s or hm³·s⁻¹). Named after Harald Sverdrup, it is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents. Volumetric flow rate is defined by the limit [3]

Q = V̇ = lim(Δt→0) ΔV/Δt = dV/dt

that is, the flow of volume of fluid V through a surface per unit time t. Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is also a scalar quantity.
The change in volume is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of volume at the boundary minus the final amount at the boundary, since the change in volume flowing through the area would be zero for steady flow. IUPAC [4] prefers the notations q_v [5] and q_m [6] for volumetric flow and mass flow respectively, to distinguish them from the notation Q [7] for heat. Volumetric flow rate can also be defined by

Q = v · A

where v is the flow velocity and A is the (vector) cross-sectional area. The above equation is only true for a uniform or homogeneous flow velocity through a flat or planar cross section. In general, including spatially variable or non-homogeneous flow velocity and curved surfaces, the equation becomes a surface integral:

Q = ∬_A v · dA

This is the definition used in practice. The area required to calculate the volumetric flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. The vector area is a combination of the magnitude of the area through which the volume passes, A, and a unit vector normal to the area, n̂; the relation is A = A n̂. The reason for the dot product is as follows. The only volume flowing through the cross-section is the amount normal to the area, that is, parallel to the unit normal. This amount is

Q = v A cos θ

where θ is the angle between the unit normal n̂ and the velocity vector v of the substance elements. The amount passing through the cross-section is reduced by the factor cos θ: as θ increases, less volume passes through. Substance which passes tangential to the area, that is, perpendicular to the unit normal, does not pass through the area. This occurs when θ = π/2, and so this amount of the volumetric flow rate is zero:

Q = v A cos(π/2) = 0

These results are equivalent to the dot product between velocity and the normal direction to the area.
When the mass flow rate is known and the density can be assumed constant, this is an easy way to get Q:

Q = ṁ / ρ

where ṁ is the mass flow rate and ρ is the density. In internal combustion engines, the time–area integral is considered over the range of valve opening. In the time–lift integral of valve lift over the opening period, T is the time per revolution, R is the distance from the camshaft centreline to the cam tip, r is the radius of the camshaft (that is, R − r is the maximum lift), θ₁ is the angle where opening begins, and θ₂ is where the valve closes (seconds, mm, radians). This has to be factored by the width (circumference) of the valve throat. The answer is usually related to the cylinder's swept volume.
https://en.wikipedia.org/wiki/Volumetric_flow_rate
In fluid dynamics, the volumetric flux is the rate of volume flow across a unit area (m³·s⁻¹·m⁻²); it has dimensions of volume/(time·area), equivalent to distance/time, i.e. a mean velocity. The density of a particular property in a fluid's volume, multiplied by the volumetric flux of the fluid, thus defines the advective flux of that property. [1] The volumetric flux through a porous medium is called the superficial velocity, and it is often modelled using Darcy's law. Volumetric flux is not to be confused with volumetric flow rate, which is the volume of fluid that passes through a given surface per unit of time (as opposed to per unit surface).
https://en.wikipedia.org/wiki/Volumetric_flux
The volumetric heat capacity of a material is the heat capacity of a sample of the substance divided by the volume of the sample. It is the amount of energy that must be added, in the form of heat, to one unit of volume of the material in order to cause an increase of one unit in its temperature. The SI unit of volumetric heat capacity is the joule per kelvin per cubic metre, J⋅K⁻¹⋅m⁻³. The volumetric heat capacity can also be expressed as the specific heat capacity (heat capacity per unit of mass, in J⋅K⁻¹⋅kg⁻¹) times the density of the substance (in kg/L, or g/mL). [1] It is defined to serve as an intensive property. This quantity may be convenient for materials that are commonly measured by volume rather than mass, as is often the case in engineering and other technical disciplines. The volumetric heat capacity often varies with temperature, and is different for each state of matter. While the substance is undergoing a phase transition, such as melting or boiling, its volumetric heat capacity is technically infinite, because the heat goes into changing its state rather than raising its temperature. The volumetric heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (volumetric heat capacity at constant pressure) than when it is heated in a closed vessel that prevents expansion (volumetric heat capacity at constant volume). If the amount of substance is taken to be the number of moles in the sample (as is sometimes done in chemistry), one gets the molar heat capacity (whose SI unit is the joule per kelvin per mole, J⋅K⁻¹⋅mol⁻¹). The volumetric heat capacity is defined as

s(T) = ΔQ(T) / (V(T) ΔT), in the limit ΔT → 0

where V(T) is the volume of the sample at temperature T, and ΔQ(T) is the amount of heat energy needed to raise the temperature of the sample from T to T + ΔT.
This parameter is an intensive property of the substance. Since both the heat capacity of an object and its volume may vary with temperature, in unrelated ways, the volumetric heat capacity is usually a function of temperature too. It is equal to the specific heat c(T) of the substance times its density (mass per volume) ρ(T), both measured at the temperature T. Its SI unit is joule per kelvin per cubic metre (J⋅K⁻¹⋅m⁻³). This quantity is used almost exclusively for liquids and solids, since for gases it may be confused with the "specific heat capacity at constant volume", which generally has very different values. International standards now recommend that "specific heat capacity" always refer to capacity per unit of mass. [2] Therefore, the word "volumetric" should always be used for this quantity. Dulong and Petit predicted in 1818 [citation needed] that the product of solid substance density and specific heat capacity (ρc_p) would be constant for all solids. This amounted to a prediction that volumetric heat capacity in solids would be constant. In 1819 they found that volumetric heat capacities were not quite constant, but that the most constant quantity was the heat capacity of solids adjusted by the presumed weight of the atoms of the substance, as defined by Dalton (the Dulong–Petit law). This quantity was proportional to the heat capacity per atomic weight (or per molar mass), which suggested that it is the heat capacity per atom (not per unit of volume) which is closest to being a constant in solids. Eventually it became clear that heat capacities per particle for all substances in all states are the same, to within a factor of two, so long as temperatures are not in the cryogenic range.
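The relation s = c(T)·ρ(T) described above can be sketched directly; the specific heats and densities below are approximate room-temperature values, used only for illustration.

```python
# Sketch: volumetric heat capacity as specific heat times density,
# s = c * rho. Material values are approximate room-temperature figures.

def volumetric_heat_capacity(c_j_per_kg_k, rho_kg_per_m3):
    """J/(K m^3) = J/(kg K) * kg/m^3."""
    return c_j_per_kg_k * rho_kg_per_m3

materials = {  # name: (specific heat J/(kg K), density kg/m^3), approximate
    "water": (4184.0, 997.0),
    "iron": (449.0, 7874.0),
    "bismuth": (122.0, 9780.0),
}
for name, (c, rho) in materials.items():
    s = volumetric_heat_capacity(c, rho)
    print(f"{name}: {s / 1e6:.2f} MJ/(K m^3)")
```

The computed values land near the figures quoted in the text (roughly 4.2 for water, 3.4 to 3.5 for iron, 1.2 for bismuth), illustrating the narrow range for solids despite the large spread in density.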
The volumetric heat capacity of solid materials at room temperature and above varies widely, from about 1.2 MJ⋅K⁻¹⋅m⁻³ (for example bismuth [3]) to 3.4 MJ⋅K⁻¹⋅m⁻³ (for example iron [4]). This is mostly due to differences in the physical size of atoms: atoms vary greatly in mass but much less in size, so the heaviest tend to be denser, and atoms come closer to occupying the same average volume in solids than their masses alone would predict. If all atoms were the same size, molar and volumetric heat capacity would be proportional, differing only by a single constant reflecting the ratios of the atomic molar volumes of materials (their atomic density). An additional factor for all types of specific heat capacities (including molar specific heats) further reflects the degrees of freedom available to the atoms composing the substance at various temperatures. For most liquids the range of volumetric heat capacities is narrower, for example octane at 1.64 MJ⋅K⁻¹⋅m⁻³ or ethanol at 1.9 MJ⋅K⁻¹⋅m⁻³. This reflects the modest loss of degrees of freedom for particles in liquids as compared with solids. However, water has a very high volumetric heat capacity, at 4.18 MJ⋅K⁻¹⋅m⁻³, and ammonia is also fairly high, at 3.3 MJ⋅K⁻¹⋅m⁻³. For gases at room temperature, the volumetric heat capacity per atom (not per molecule) varies between different gases only by a small factor less than two, because every ideal gas has the same molar volume. Thus, each gas molecule occupies the same mean volume in all ideal gases, regardless of the type of gas (see kinetic theory). This fact gives each gas molecule the same effective "volume" in all ideal gases (although this volume per molecule in gases is far larger than the volume molecules occupy on average in solids or liquids).
Thus, in the limit of ideal gas behavior (which many gases approximate except at low temperatures and/or extremes of pressure), this property reduces differences in gas volumetric heat capacity to simple differences in the heat capacities of individual molecules. As noted, these differ by a factor depending on the degrees of freedom available to particles within the molecules. Large, complex gas molecules may have high heat capacities per mole (of molecules), but their heat capacities per mole of atoms are very similar to those of liquids and solids, again differing by less than a factor of two per mole of atoms. This factor of two represents vibrational degrees of freedom available in solids vs. gas molecules of various complexities. In monatomic gases (like argon) at room temperature and constant volume, volumetric heat capacities are all very close to 0.5 kJ⋅K⁻¹⋅m⁻³, consistent with the theoretical value of 3/2 R per kelvin per mole of gas molecules (where R is the gas constant). As noted, the much lower values for gas heat capacity in terms of volume as compared with solids (although more comparable per mole, see below) result mostly from the fact that gases under standard conditions consist mostly of empty space (about 99.9% of volume), which is not filled by the atomic volumes of the atoms in the gas. Since the molar volume of gases is very roughly 1000 times that of solids and liquids, this results in a factor of about 1000 loss in volumetric heat capacity for gases, as compared with liquids and solids. Monatomic gas heat capacities per atom (not per molecule) are decreased by a factor of 2 with respect to solids, due to the loss of half of the potential degrees of freedom per atom for storing energy in a monatomic gas, as compared with an ideal solid. There is some difference in the heat capacity of monatomic vs.
polyatomic gases, and gas heat capacity is also temperature-dependent in many ranges for polyatomic gases; these factors act to modestly (up to the discussed factor of 2) increase heat capacity per atom in polyatomic gases, as compared with monatomic gases. Volumetric heat capacities in polyatomic gases vary widely, however, since they are dependent largely on the number of atoms per molecule in the gas, which in turn determines the total number of atoms per volume in the gas. The volumetric heat capacity is defined as having SI units of J /( m 3 ⋅ K ). It can also be described in Imperial units of BTU /( ft 3 ⋅ °F ). Since the bulk density of a solid chemical element is strongly related to its molar mass, while its molar heat capacity is roughly constant (usually about 3 R per mole, as noted above), there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. This is due to a very approximate tendency of atoms of most elements to be about the same size, despite much wider variations in density and atomic weight. These two factors (constancy of atomic volume and constancy of mole-specific heat capacity) result in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be strikingly consistent. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium's volumetric heat capacity is only about 20% larger than lithium's.
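The uranium–lithium comparison can be reproduced from handbook values (the densities and molar heat capacities below are standard room-temperature figures, assumed here for illustration; with density in g/cm³, molar mass in g/mol and molar heat capacity in J/(mol·K), the product C p ·ρ/M comes out directly in MJ⋅K −1 ⋅m −3 ).

```python
# Volumetric heat capacity c_vol = C_p,molar * density / molar_mass.
# With density in g/cm^3, molar mass in g/mol and C_p in J/(mol*K),
# the result comes out directly in J/(cm^3*K) = MJ/(m^3*K).
elements = {
    # name: (density g/cm^3, molar mass g/mol, molar C_p J/(mol*K))
    "lithium": (0.534, 6.94, 24.86),
    "uranium": (19.05, 238.03, 27.67),
}

c_vol = {name: cp * rho / M for name, (rho, M, cp) in elements.items()}
density_ratio = elements["uranium"][0] / elements["lithium"][0]
cvol_ratio = c_vol["uranium"] / c_vol["lithium"]

print(f"density ratio U/Li: {density_ratio:.0f}x")          # ~36x
print(f"c_vol Li: {c_vol['lithium']:.2f} MJ/(m^3*K)")
print(f"c_vol U:  {c_vol['uranium']:.2f} MJ/(m^3*K)")
print(f"volumetric ratio U/Li: {cvol_ratio:.2f}")           # ~1.16
```

Despite the ~36-fold density difference, the volumetric heat capacities differ by only around 16–20% (the exact figure depends on the data set used), as the text states.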
Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic , which is only 14.5% less dense than antimony , has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follow the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, rather than of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior. The volumetric heat capacity of liquids can be measured from the correlation between thermal conductivity and thermal diffusivity . It can also be obtained directly during thermal conductivity analysis using thermal conductivity analyzers that use techniques like the transient plane source method. [ 5 ] For gases it is necessary to distinguish between volumetric heat capacity at constant volume and volumetric heat capacity at constant pressure , which is always larger due to the pressure–volume work done as a gas expands during heating at constant pressure (thus absorbing heat which is converted to work). The distinctions between constant-volume and constant-pressure heat capacities are also made in various types of specific heat capacity (the latter meaning either mass-specific or mole-specific heat capacity).
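The arsenic–antimony figures can likewise be verified from handbook values (assumed here for illustration): the mass-specific heat capacity is the molar heat capacity divided by the molar mass.

```python
# Mass-specific heat capacity c_mass = C_p,molar / molar_mass, using
# standard handbook values for gray arsenic and antimony.
As_rho, As_M, As_cp = 5.727, 74.922, 24.64   # g/cm^3, g/mol, J/(mol*K)
Sb_rho, Sb_M, Sb_cp = 6.697, 121.760, 25.23

c_mass_As = As_cp / As_M      # J/(g*K)
c_mass_Sb = Sb_cp / Sb_M

print(f"As is {100 * (1 - As_rho / Sb_rho):.1f}% less dense than Sb")                   # ~14.5%
print(f"As has {100 * (c_mass_As / c_mass_Sb - 1):.0f}% more heat capacity per mass")   # ~59%
```

The computed values reproduce the 14.5% density difference and the ~59% specific-heat difference quoted above, confirming that the disparity tracks molar mass rather than density.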
Thermal inertia is a term commonly used to describe the observed delays in a body's temperature response during heat transfers . The phenomenon exists because of a body's ability to both store and transport heat relative to its environment. Larger values of volumetric heat capacity, as may occur in association with thermal effusivity , typically yield slower temperature responses.
https://en.wikipedia.org/wiki/Volumetric_heat_capacity
A volumetric pipette , bulb pipette , or belly pipette [ 1 ] allows extremely accurate measurement (to four significant figures) of the volume of a solution. [ 2 ] It is calibrated to deliver accurately a fixed volume of liquid. These pipettes have a large bulb with a long narrow portion above, with a single graduation mark, as each is calibrated for a single volume (like a volumetric flask ). Typical volumes are 1, 2, 5, 10, 20, 25, 50 and 100 mL. Volumetric pipettes are commonly used in analytical chemistry to make laboratory solutions from a base stock as well as to prepare solutions for titration . ASTM standard E969 defines the standard tolerance for volumetric transfer pipettes. The tolerance depends on the size: a 0.5-mL pipette has a tolerance of ±0.006 mL, while a 50-mL pipette has a tolerance of ±0.05 mL. (These are for Class A pipettes; Class B pipettes are given a tolerance of twice that for the corresponding Class A.) A specialized example of a volumetric pipette is the microfluid pipette (capable of dispensing as little as 10 μL), designed with a circulating liquid tip that generates a self-confining volume in front of its outlet channels. [ 3 ] Pyrex started to make laboratory equipment in 1916 and became a favorite brand for the scientific community due to the natural properties of borosilicate glass , including resistance to chemicals, thermal shock, and mechanical stress. [ 4 ] This article about analytical chemistry is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Volumetric_pipette
Voluntary Protection Programs ( VPP ) is an Occupational Safety and Health Administration (OSHA) initiative that encourages private industry and federal agencies to prevent workplace injuries and illnesses through hazard prevention and control, worksite analysis, training, and cooperation between management and workers. VPP enlists worker involvement to achieve injury and illness rates that are below national Bureau of Labor Statistics averages for their respective industries. [ 1 ] Even though the original OSH Act of 1970 included language that discussed the concept of VPP, the program did not start until an experimental California program began in 1979. The OSHA program started in 1982 with the first approved facilities. [ 2 ] [ 3 ] VPP offers two levels of certification: Star is the highest level. It recognizes employers and employees for developing and implementing continuous improvement workplace safety and health management programs that result in injury/illness rates that are below the national averages for their industries. Merit is for employers and employees that have implemented good safety and health programs but require additional improvements. They must also commit to seeking to advance to Star level within three years. VPP offers three types of certification: Site-based Star and Merit certifications are offered for permanent work sites and long-term construction sites; they may also be used to certify resident contractors at participating VPP sites or under a corporate program. Mobile-workforce certification is for companies whose employees work on location at various sites. Large organizations that implement organization-wide health and safety management programs that extend to their individual sites are able to seek corporate VPP certification. As of October 31, 2012, 2,370 entities were registered as VPP certified, with the vast majority achieving the Star level. [ 4 ] All organizations are re-evaluated every three to five years to remain in the programs.
https://en.wikipedia.org/wiki/Voluntary_Protection_Program
In gardening and agronomic terminology, a volunteer is a plant that grows on its own, rather than being deliberately planted by a farmer or gardener . [ 1 ] The action of such plants — to sprout or grow in this fashion — may also be described as volunteering . [ 2 ] Volunteers often grow from seeds that float in on the wind , are dropped by birds , or are inadvertently mixed into compost . Some volunteers may be encouraged by gardeners once they appear, being watered, fertilized, or otherwise cared for, unlike weeds , which are unwanted volunteers. Volunteers that grow from the seeds of specific cultivars are not reliably identical or similar to their parent and often differ significantly from it. Such open pollinated plants, if they show desirable characteristics, may be selected to become new cultivars. In agricultural rotations , self-set plants from the previous year's crop may become established as weeds in the current crop. For example, volunteer winter wheat will germinate to quite high levels in a following oilseed rape crop, usually requiring chemical control measures. In agricultural research, the high purity of a harvested crop is often desirable. To achieve this, typically a group of temporary workers will walk the crop rows looking for volunteer plants, or "rogue" plants in an exercise typically referred to as " roguing ". This agriculture article is a stub . You can help Wikipedia by expanding it . This horticulture article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Volunteer_(botany)
A volute is a curved funnel that increases in area as it approaches the discharge port. [ 1 ] The volute of a centrifugal pump is the casing that receives the fluid being pumped by the impeller , maintaining the velocity of the fluid through to the diffuser. As liquid exits the impeller it has high kinetic energy, and the volute directs this flow through to the discharge. As the fluid travels along the volute it is joined by more and more fluid exiting the impeller, but, because the cross-sectional area of the volute increases, the velocity is maintained if the pump is running close to the design point. If the pump runs at a lower flow rate than designed, the velocity will decrease along the volute, leading to a pressure rise that causes a cross thrust on the impeller, which is observed as vibration. If the pump flow is higher than the design flow, the velocity will increase along the volute and the pressure will decrease, according to the first law of thermodynamics. This causes a side thrust in the opposite direction to that caused by low flow, but the result is the same – vibration, with resultant short bearing and seal life. The volute does not convert kinetic energy into pressure – that is done at the diffuser by reducing liquid velocity while increasing pressure. [ citation needed ] The name "volute" is inspired by the resemblance of this kind of casing to the scroll-like part near the top of an ionic order column in classical architecture , called a volute . In a split volute or double volute pump, the path along the volute is partitioned, providing two distinct discharge paths. The streams start out 180 degrees from each other, and merge by the time they reach the discharge port. This arrangement helps to balance the radial force on the bearings. [ 2 ] This engineering-related article is a stub . You can help Wikipedia by expanding it .
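The constant-velocity design principle described above (cross-sectional area growing in proportion to the flow collected from the impeller) can be sketched as follows; the function name and the uniform-discharge assumption are illustrative, not a pump-design recipe.

```python
import math

def volute_area(theta_rad: float, design_flow: float, throat_velocity: float) -> float:
    """Cross-sectional area of an idealized volute at angle theta from the
    tongue, assuming the impeller discharges uniformly around its periphery.
    The flow collected by angle theta is Q * theta / (2*pi); keeping the
    velocity constant at the design point makes A(theta) grow linearly."""
    return design_flow * theta_rad / (2 * math.pi * throat_velocity)

# Example: 0.05 m^3/s design flow carried at 5 m/s through the volute.
Q, v = 0.05, 5.0
A_quarter = volute_area(math.pi / 2, Q, v)   # area a quarter-turn from the tongue
A_full = volute_area(2 * math.pi, Q, v)      # throat area at the discharge, Q/v
```

At the design point the throat area is simply Q/v; off-design, with the areas fixed, the same geometry forces the velocity (and hence pressure) changes that produce the side thrusts described above.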
https://en.wikipedia.org/wiki/Volute_(pump)
Volvic is a brand of mineral water . Its source is in the Chaîne des Puys –Limagne Fault, Auvergne Volcanoes Regional Park , [ 1 ] at the Puy de Dôme [ 2 ] in France . [ citation needed ] Over 50% of the production of Volvic water is exported to more than sixty countries throughout the world. Two bottling plants produce over one billion bottles of water annually and are the principal employers of the local Volvic commune . The first of the springs in the area was tapped in 1922, and the first bottles appeared on the market in 1938. In October 1993, the Volvic company was bought by Groupe Danone . Since 1997, Volvic has been using PETE , a recyclable material, to make its bottles. [ 3 ] The company became carbon-neutral during 2020. [ 4 ] During the same year Volvic and the esports organisation Berlin International Gaming commenced a partnership. [ 5 ] Volvic also produces a range of water with natural fruit flavouring named Volvic Touch of Fruit, with sugar-free options. Recent flavours include strawberry, summer fruits, orange & peach, cherry, and lemon & lime. Other ranges available are Volvic Juiced (water with fruit juice from concentrate ), and Volvic Sparkling (sparkling flavoured water similar to Touch of Fruit). A 2006 study found that drinking Volvic could reduce the levels of aluminium in the bodies of people with Alzheimer's disease ; a link has been suggested between human exposure to aluminium and the incidence of the disease. [ 7 ]
https://en.wikipedia.org/wiki/Volvic_(mineral_water)
The Volyn biota are fossilized microorganisms found in rock samples from miarolitic cavities of igneous rocks collected in Zhytomyr Oblast , Ukraine. The locality is within the historical region of Volyn , hence the name of the find. Exceptionally well preserved, they have been dated to 1.5 Ga , within the " Boring Billion " period of the Proterozoic geological eon . [ 1 ] [ 2 ] The Volyn biota were found in samples from miarolitic pegmatites ("chamber pegmatites") collected from the Korosten Pluton [ uk ] of the Ukrainian Shield . They were described as early as 1987, but interpreted as abiogenic formations. [ 2 ] In 2000, these formations were reinterpreted as fossilized cyanobacteria from geyser -type deposits. [ 3 ] Until very recently the origin of the Korosten pegmatites was not fully understood, but they were dated to 1.8–1.7 Ga. [ 4 ] Franz et al. (2022, 2023), investigating newly recovered samples that they date to 1.5 Ga, described the morphology and the internal structure of the Volyn biota, reported the presence of different types of filaments of varying diameters, shapes and branching in the studied organisms, and provided evidence of the presence of fungi -like organisms and a Precambrian continental deep biosphere . Some fossils give evidence of sessility , while others indicate a free-living lifestyle. [ 1 ] [ 2 ] Usually Precambrian fossils are not well preserved, but the Volyn biota had exceptional conditions for fossilization in cavities with silicon tetrafluoride -rich fluids. The cavities also protected them from further diagenetic - metamorphic overprint . [ 2 ] The Volyn biota provide additional support [ 2 ] for the claim that filamentous fossils dated to 2.4 Ga from the Ongeluk Formation ( Griqualand West , South Africa) were also fungi-like organisms. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Volyn_biota
Vomiting agents are chemical weapon agents causing vomiting . Prolonged exposure can be lethal . They were used for the first time during WWI . [ 1 ] [ 2 ] [ 3 ] This article related to weaponry is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Vomiting_agent
Vomocytosis (sometimes called non-lytic expulsion ) is the cellular process by which phagocytes expel live organisms that they have engulfed without destroying them. Vomocytosis is one of many methods used by cells to expel internal materials into their external environment, yet it is distinct in that both the engulfed organism and host cell remain undamaged by expulsion. As engulfed organisms are released without being destroyed, vomocytosis has been hypothesized to be utilized by pathogens as an escape mechanism from the immune system . The exact mechanisms, as well as the repertoire of cells that utilize this mechanism, are currently unknown, yet interest in this unique cellular process is driving continued research with the hope of elucidating these unknowns. Vomocytosis was first reported in 2006 [ 1 ] [ 2 ] by two groups, working simultaneously in the UK and the US, based on time-lapse microscopy footage characterising the interaction between macrophages and the human fungal pathogen Cryptococcus neoformans . Subsequently, this process has also been seen with other fungal pathogens such as Candida albicans [ 3 ] and Candida krusei . [ 4 ] It has also been speculated [ 5 ] that the process may be related to the expulsion of bacterial pathogens such as Mycobacterium marinum [ 6 ] from host cells. Vomocytosis has been observed in phagocytic cells from mice, humans and birds, [ 7 ] as well as being directly observed in zebrafish [ 8 ] and indirectly detected (via flow cytometry ) in mice. [ 9 ] Amoebae exhibit a similar process to vomocytosis whereby phagosomal material that cannot be digested is exocytosed. Cryptococci are exocytosed from amoebae via this mechanism, but inhibition of the constitutive pathway demonstrated that cryptococci could also be expelled via vomocytosis.
[ 10 ] A full understanding of the mechanisms involved in vomocytosis is not currently available, yet advances in research have driven initial mechanistic descriptions of crucial steps involved in the process. Research has shown vomocytosis does not occur when pathogens are dead or when engulfed materials are non-living, indicating the survival of phagosomal cargo may be crucial for triggering or enhancing vomocytosis. [ 11 ] [ 12 ] Additionally, the phagosomal pH may play an important role in vomocytosis efficacy, as research has demonstrated that vomocytosis rates drop as phagocytes become more acidic and that vomocytosis is increased by the addition of weak bases to phagocytes. [ 11 ] [ 12 ] The membrane composition and cellular state are implicated in vomocytosis, which has been shown to decrease with membrane permeability and increase in states of autophagy . [ 11 ] Furthermore, inflammatory signals such as Type I interferons , which are produced in response to viral infections , are known to enhance vomocytosis. [ 12 ] [ 13 ] [ 14 ] The impacts of these described factors on inducing vomocytosis are still being elaborated, and it is likely that they vary based on other unknown external and internal factors. Just as in standard exocytosis, rearrangements of the actin cytoskeleton within the host cell are crucial for allowing vomocytosis to occur. [ 15 ] In contrast to standard exocytosis, the engulfed pathogen is not lysed by internal components of the host cell, and the vesicle is brought close to the cellular membrane where it can fuse and release the pathogen cargo. [ 11 ] Annexin A2 , a membrane-bound protein, helps regulate vomocytosis and promote the fusing of vesicles to the plasma membrane. [ 11 ] [ 12 ] In annexin A2 deficient cell lines, rates of vomocytosis were decreased. [ 11 ] Furthermore, screens of macrophage kinase inhibitors revealed signaling pathways linked to vomocytosis.
[ 16 ] ERK5 , involved in the MAPK signaling pathway that communicates surface signals to cellular DNA, was shown to suppress vomocytosis. [ 16 ] Additional signaling pathways involved in vomocytosis have yet to be determined. Furthermore, different morphologies of vomocytosis have been documented, [ 17 ] and it is possible that the underlying cellular mechanism may vary between them. Research has been devoted to understanding the mechanisms and importance of vomocytosis as it is hypothesized to be linked to many significant biological processes. Vomocytosis plays a role in lateral transfer, a process by which cells transfer engulfed cargo to a neighboring recipient cell, as initial cells expel their cargo undamaged so it can be taken up by recipient cells. [ 11 ] Additionally, vomocytosis is hypothesized to be utilized as an escape mechanism by pathogens, as it allows them to evade degradation by macrophages. [ 11 ] [ 12 ] Since there is no damage to host cells or pathogens during vomocytosis, the immune system is not triggered, which allows for further potential evasion of hosts. More research is necessary to determine whether vomocytosis is initiated by engulfed pathogens for this purpose or by host cells, with the escape being simply an unintentional benefit to pathogens. An additional hypothesis is that vomocytosis may enhance the pathogenesis or spread of a pathogen, as pathogens engulfed by macrophages may later be expelled in locations different from the site of acute infection. [ 11 ] A better grasp of host-pathogen interactions will clarify vomocytosis's role in infection progression. Lastly, vomocytosis has been implicated in the tumor response, as tumor-associated macrophages (TAMs) are speculated to be able to modulate the tumor microenvironment (TME) via vomocytosis.
[ 18 ] Better understanding the mechanisms of inducing and regulating vomocytosis will enhance our knowledge of host-pathogen and host-self interactions, allowing for advances in our ability to respond to infections and tumors.
https://en.wikipedia.org/wiki/Vomocytosis
In developmental biology , von Baer's laws of embryology (or laws of development ) are four rules proposed by Karl Ernst von Baer to explain the observed pattern of embryonic development in different species . [ 1 ] Von Baer formulated the laws in his book On the Developmental History of Animals ( German : Über Entwickelungsgeschichte der Thiere ), published in 1828, while working at the University of Königsberg . He specifically intended to rebut Johann Friedrich Meckel 's 1808 recapitulation theory . According to that theory, embryos pass through successive stages that represent the adult forms of less complex organisms in the course of development, and that ultimately reflects scala naturae (the great chain of being ). [ 2 ] Von Baer believed that such linear development is impossible. He posited that instead of linear progression, embryos start from one or a few basic forms that are similar in different animals, and then develop in a branching pattern into increasingly different organisms. Defending his ideas, he was also opposed to Charles Darwin 's 1859 theory of common ancestry and descent with modification , and particularly to Ernst Haeckel 's revised recapitulation theory with its slogan " ontogeny recapitulates phylogeny ". [ 3 ] [ 4 ] Darwin was however broadly supportive of von Baer's view of the relationship between embryology and evolution. Von Baer described his laws in his book Über Entwickelungsgeschichte der Thiere. Beobachtung und Reflexion , published in 1828. [ 5 ] They are a series of statements generally summarised into four points, as translated by Thomas Henry Huxley in his Scientific Memoirs : [ 6 ] Von Baer discovered the blastula (the early hollow ball stage of an embryo) and the development of the notochord (the stiffening rod along the back of all chordates , which forms after the blastula and gastrula stages).
From his observations of these stages in different vertebrates, he realised that Johann Friedrich Meckel 's recapitulation theory must be wrong. For example, he noticed that the yolk sac is found in birds , but not in frogs . According to the recapitulation theory, such structures should invariably be present in frogs because they were assumed to be at a lower level in the evolutionary tree. Von Baer concluded that while structures like the notochord are recapitulated during embryogenesis, whole organisms are not. [ 7 ] He characterised the recapitulationist view as follows (as translated): The embryo successively adds the organs that characterize the animal classes in the ascending scale. When the human embryo, for instance, is but a simple vesicle, it is an infusorian; when it has gained a liver, it is a mussel; with the appearance of the osseous system, it enters the class of fishes; and so forth, until it becomes a mammal and then a human being. [ 8 ] In terms of taxonomic hierarchy, according to von Baer, characters in the embryo are formed in a top-to-bottom sequence, first those of the largest and oldest taxon, the phylum , then in turn class, order, family, genus, and finally species. [ 7 ] The laws received a mixed appreciation. While they were criticised in detail, they formed the foundation of modern embryology . [ 1 ] The most important supporter of von Baer's laws was Charles Darwin . Darwin came across von Baer's laws through the work of Johannes Peter Müller in 1842, and realised that they supported his own theory of descent with modification . [ 9 ] Darwin was a critic of the recapitulation theory and agreed with von Baer that an adult animal is not reflected by the embryo of another animal, and that only the embryos of different animals appear similar. [ 10 ] He wrote in his Origin of Species (first edition, 1859): [The] adult [animal] differs from its embryo, owing to variations supervening at a not early age, and being inherited at a corresponding age.
This process, whilst it leaves the embryo almost unaltered, continually adds, in the course of successive generations, more and more difference to the adult. Thus the embryo comes to be left as a sort of picture, preserved by nature, of the ancient and less modified condition of each animal. This view may be true, and yet it may never be capable of full proof. [ 11 ] Darwin also said: It has already been casually remarked that certain organs in the individual, which when mature become widely different and serve for different purposes, are in the embryo exactly alike. The embryos, also, of distinct animals within the same class are often strikingly similar: a better proof of this cannot be given, than a circumstance mentioned by Agassiz, namely, that having forgotten to ticket the embryo of some vertebrate animal, he cannot now tell whether it be that of a mammal, bird, or reptile. [ 12 ] Darwin's attribution to Louis Agassiz was a mistake, [ 13 ] and was corrected in the third edition as von Baer. [ 14 ] He further explained in the later editions of Origin of Species (from third to sixth editions), and wrote: It might be thought that the amount of change which the various parts and organs [of vertebrates] undergo in their development from the embryo to maturity would suffice as a standard of comparison; but there are cases, as with certain parasitic crustaceans, in which several parts of the structure become less perfect, so that the mature animal cannot be called higher than its larva. Von Baer's standard seems the most widely applicable and the best, namely, the amount of differentiation of the different parts (in the adult state, as I should be inclined to add) and their specialisation for different functions. [ 15 ] [ 16 ] Even so, von Baer was a vociferous anti-Darwinist, although he believed in the common ancestry of species. 
[ 17 ] Devoting much of his scholarly effort to criticising natural selection, his criticism culminated in his last work Über Darwins Lehre (" On Darwin's Doctrine "), published in the year of his death in 1876. [ 18 ] The British zoologist Adam Sedgwick studied the developing embryos of dogfish and chicken , and in 1894 noted a series of differences, such as the green yolk in the dogfish and yellow yolk in the chicken, the absence of an embryonic rim in chick embryos, the absence of a blastopore in dogfish, and differences in the gill slits and gill clefts. He concluded: There is no stage of development in which the unaided eye would fail to distinguish between them with ease... A blind man could distinguish between them. [ 19 ] Modern biologists still debate the validity of the laws. In one line of argument, it is said that although every detail of von Baer's laws may not hold, the basic assumption that early developmental stages of animals are highly conserved is a biological fact. [ 20 ] But opponents counter that while there are conserved genetic conditions in embryos, the genetic events that govern development are not conserved. [ 21 ] One example of a problem for von Baer's laws is the formation of the notochord before the heart: the heart is present in many invertebrates, which never have a notochord. [ 22 ]
https://en.wikipedia.org/wiki/Von_Baer's_laws_(embryology)
In organic chemistry , the von Baeyer nomenclature is a system for describing polycyclic (i.e. multi- ringed ) hydrocarbons . The system was originally developed in 1900 by German chemist Adolf von Baeyer for bicyclic systems [ 1 ] and in 1913 expanded by Eduard Buchner and Wilhelm Weigand for tricyclic systems. [ 2 ] The system has been adopted and extended by the IUPAC as part of its nomenclature for organic chemistry . The modern version has been extended to cover more cases of compounds including an arbitrary number of cycles, heterocyclic compounds and unsaturated compounds . [ 3 ] This chemistry -related article is a stub . You can help Wikipedia by expanding it .
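As a rough illustration of the bicyclic part of the system: the name lists the three bridge lengths between the two bridgehead atoms in decreasing order inside brackets, and the parent alkane name counts all ring carbons. The helper below is hypothetical and handles only simple unsubstituted saturated carbocycles.

```python
# Sketch of von Baeyer naming for simple saturated bicyclic hydrocarbons.
# Only ring sizes covered by the alkane names listed below are handled.
ALKANES = {4: "butane", 5: "pentane", 6: "hexane", 7: "heptane",
           8: "octane", 9: "nonane", 10: "decane", 11: "undecane"}

def bicyclo_name(bridges):
    """Name a bicyclic alkane from its three bridge lengths (numbers of
    carbons between the two bridgeheads, excluding the bridgeheads)."""
    if len(bridges) != 3:
        raise ValueError("a bicyclic system has exactly three bridges")
    bridges = sorted(bridges, reverse=True)   # cited in decreasing order
    total_carbons = sum(bridges) + 2          # plus the two bridgehead atoms
    descriptor = ".".join(str(b) for b in bridges)
    return f"bicyclo[{descriptor}]{ALKANES[total_carbons]}"

print(bicyclo_name([2, 2, 1]))   # norbornane -> bicyclo[2.2.1]heptane
print(bicyclo_name([2, 2, 2]))   # -> bicyclo[2.2.2]octane
```

For example, norbornane has bridges of 2, 2 and 1 carbons plus two bridgeheads, giving seven carbons in total and the name bicyclo[2.2.1]heptane.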
https://en.wikipedia.org/wiki/Von_Baeyer_nomenclature
The von Braun amide degradation is the chemical reaction of a monosubstituted amide with phosphorus pentachloride or thionyl chloride to give a nitrile and an organo halide . [ 1 ] It is named after Julius Jacob von Braun, who first reported the reaction. [ 2 ] [ 3 ] The secondary amide 1 reacts via its enolized form with phosphorus pentachloride to form the oxonium ion 2 . This produces a chloride ion, which deprotonates the oxonium ion to form an imine 3 and hydrogen chloride. These then react with one another to form the β-chloroimine 4 , with loss of the phosphorus chloride residue. The β-chloroimine 4 is unstable and undergoes internal elimination to form a nitrilium cation 5 , which is cleaved by attack by chloride to form a nitrile 6a and a haloalkane 6b .
https://en.wikipedia.org/wiki/Von_Braun_amide_degradation
The von Braun reaction is an organic reaction in which a tertiary amine reacts with cyanogen bromide to give an organocyanamide . [ 1 ] An example is the reaction of N , N -dimethyl-1-naphthylamine : [ 2 ] These days, most chemists have replaced the cyanogen bromide reagent with chloroethyl chloroformate; Olofson et al. appear to have been the first to report this. [ 3 ] The reaction mechanism consists of two nucleophilic substitutions : the amine is the first nucleophile, displacing bromide, which then acts as the second nucleophile. [ 4 ] [ 5 ] In the following, the mechanism is described using trimethylamine as an example: [ 6 ] First, the trimethylamine reacts with the cyanogen bromide to form a quaternary ammonium salt, which in the next step reacts by splitting off bromomethane to give dimethylcyanamide. This is a bimolecular nucleophilic substitution ( S N 2 ).
https://en.wikipedia.org/wiki/Von_Braun_reaction
Von Kármán swirling flow is a flow created by a uniformly rotating infinitely long plane disk, named after Theodore von Kármán , who solved the problem in 1921. [ 1 ] The rotating disk acts as a fluid pump and is used as a model for centrifugal fans or compressors. This flow is classified under the category of steady flows in which vorticity generated at a solid surface is prevented from diffusing far away by an opposing convection; other examples are the Blasius boundary layer with suction and stagnation point flow . Consider a planar disk of infinite radius rotating at a constant angular velocity Ω in a fluid which is initially at rest everywhere. Near the surface the fluid is turned by the disk due to friction, which causes centrifugal forces that move the fluid outwards. This outward radial motion of the fluid near the disk must be accompanied by an inward axial motion of the fluid towards the disk to conserve mass. Theodore von Kármán [ 1 ] noticed that the governing equations and the boundary conditions allow a solution such that u/r, v/r and w are functions of z only, where (u, v, w) are the velocity components in cylindrical (r, θ, z) coordinates, with r = 0 being the axis of rotation and z = 0 representing the plane disk. Due to symmetry, the pressure of the fluid can depend only on the radial and axial coordinates, p = p(r, z). Then the continuity equation and the incompressible Navier–Stokes equations reduce to a coupled system in which ν is the kinematic viscosity. Since there is no rotation at large z → ∞, p(r, z) becomes independent of r, resulting in p = p(z).
Hence f ( r ) = constant {\displaystyle f(r)={\text{constant}}} and ∂ p / ∂ r = 0 {\displaystyle \partial p/\partial r=0} . Here the boundary conditions for the fluid z > 0 {\displaystyle z>0} are A self-similar solution is obtained by introducing the following transformation, [ 2 ] where ρ {\displaystyle \rho } is the fluid density. The self-similar equations are with boundary conditions for the fluid η > 0 {\displaystyle \eta >0} given by The coupled ordinary differential equations need to be solved numerically and an accurate solution is given by Cochran. [ 3 ] The inflow axial velocity at infinity obtained from the numerical integration is w = − 0.886 ν Ω {\displaystyle w=-0.886{\sqrt {\nu \Omega }}} , so the total outflowing volume flux across a cylindrical surface of radius r {\displaystyle r} is 0.886 π r 2 ν Ω {\displaystyle 0.886\pi r^{2}{\sqrt {\nu \Omega }}} . The tangential stress on the disk is σ z φ = μ ( ∂ v / ∂ z ) z = 0 = ρ ν Ω 3 r G ′ ( 0 ) {\displaystyle \sigma _{z\varphi }=\mu (\partial v/\partial z)_{z=0}=\rho {\sqrt {\nu \Omega ^{3}}}rG'(0)} . Neglecting edge effects, the torque exerted by the fluid on the disk with large ( R ≫ ν / Ω {\displaystyle R\gg {\sqrt {\nu /\Omega }}} ) but finite radius R {\displaystyle R} is The factor 2 {\displaystyle 2} is added to account for both sides of the disk. From the numerical solution, the torque is given by T = − 1.94 R 4 ρ ν Ω 3 {\displaystyle T=-1.94R^{4}\rho {\sqrt {\nu \Omega ^{3}}}} . The torque predicted by the theory is in excellent agreement with experiments on large disks up to a Reynolds number of about R e = R 2 Ω / ν = 3 × 10 5 {\displaystyle Re=R^{2}\Omega /\nu =3\times 10^{5}} ; at higher Reynolds numbers the flow becomes turbulent. [ 4 ] This problem was addressed by George Keith Batchelor (1951). [ 5 ] Let Γ {\displaystyle \Gamma } be the angular velocity at infinity. Now the pressure at z → ∞ {\displaystyle z\rightarrow \infty } is 1 2 ρ Γ 2 r 2 {\displaystyle {\frac {1}{2}}\rho \Gamma ^{2}r^{2}} .
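The elided self-similar system can be integrated directly. The sketch below uses the standard similarity variables (an assumption, since the equations are not reproduced above): F = u/(rΩ), G = v/(rΩ), H = w/√(νΩ), η = z√(Ω/ν), with the infinite domain truncated at η = 12, solved with SciPy's `solve_bvp`:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Von Karman similarity equations as a first-order system y = [F, F', G, G', H]:
#   2F + H' = 0
#   F'' = F^2 - G^2 + H F'
#   G'' = 2 F G + H G'
def rhs(eta, y):
    F, Fp, G, Gp, H = y
    return np.vstack([Fp,
                      F**2 - G**2 + H * Fp,
                      Gp,
                      2 * F * G + H * Gp,
                      -2 * F])

# No slip and rotation G = 1 at the disk; swirl and radial flow die out far away
def bc(ya, yb):
    return np.array([ya[0], ya[2] - 1.0, ya[4], yb[0], yb[2]])

eta = np.linspace(0.0, 12.0, 200)
guess = np.zeros((5, eta.size))
guess[2] = np.exp(-eta)                 # G decays from 1 to 0
guess[4] = -0.9 * (1 - np.exp(-eta))    # H tends to a negative constant

sol = solve_bvp(rhs, bc, eta, guess, tol=1e-6)

print(sol.y[4, -1])   # axial inflow H at the far boundary, approx -0.886
print(sol.y[3, 0])    # G'(0), approx -0.616, which sets the disk torque
```

The computed H(∞) reproduces the inflow velocity w = −0.886√(νΩ) quoted above, and πG′(0) ≈ −1.94 recovers the torque coefficient in T = −1.94 R⁴ρ√(νΩ³).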
Hence f ( r ) = 1 2 ρ Γ 2 r 2 {\displaystyle f(r)={\frac {1}{2}}\rho \Gamma ^{2}r^{2}} and ∂ p / ∂ r = ρ Γ 2 r {\displaystyle \partial p/\partial r=\rho \Gamma ^{2}r} . Then the boundary conditions for the fluid z > 0 {\displaystyle z>0} are A self-similar solution is obtained by introducing the following transformation, The self-similar equations are with boundary conditions for the fluid η > 0 {\displaystyle \eta >0} given by The solution is easy to obtain only for γ > 0 {\displaystyle \gamma >0} , i.e., when the fluid at infinity rotates in the same sense as the plate. For γ < 0 {\displaystyle \gamma <0} , the solution is more complex, in the sense that multiple solution branches occur. Evans (1969) [ 6 ] obtained solutions for the range − 1.35 < γ < − 0.61 {\displaystyle -1.35<\gamma <-0.61} . Zandbergen and Dijkstra [ 7 ] [ 8 ] showed that the solution exhibits a square root singularity as γ → − 0.16053876 {\displaystyle \gamma \rightarrow -0.16053876} and found a second solution branch merging with the solution found for γ → − 0.16053876 {\displaystyle \gamma \rightarrow -0.16053876} . The solution of the second branch is continued until γ → 0.07452563 {\displaystyle \gamma \rightarrow 0.07452563} , at which point a third solution branch is found to emerge. They also discovered an infinity of solution branches around the point γ → 0 {\displaystyle \gamma \rightarrow 0} . Bodonyi (1975) [ 9 ] calculated solutions for large negative γ {\displaystyle \gamma } and showed that the solution breaks down at γ = − 1.436 {\displaystyle \gamma =-1.436} . If the rotating plate is allowed to have a uniform suction velocity at the plate, then a meaningful solution can be obtained for γ ≤ − 0.2 {\displaystyle \gamma \leq -0.2} .
[ 4 ] For 0 < γ < ∞ , γ ≠ 1 {\displaystyle 0<\gamma <\infty ,\ \gamma \neq 1} ( γ = 1 {\displaystyle \gamma =1} represents solid body rotation, the whole fluid rotating at the same speed), the solution approaches the solid body rotation at infinity in an oscillating manner from the plate. The axial velocity is negative w < 0 {\displaystyle w<0} for 0 ≤ γ < 1 {\displaystyle 0\leq \gamma <1} and positive w > 0 {\displaystyle w>0} for 1 < γ < ∞ {\displaystyle 1<\gamma <\infty } . There is an explicit solution when | γ − 1 | ≪ 1 {\displaystyle |\gamma -1|\ll 1} . Since both boundary conditions for G {\displaystyle G} are almost equal to one, one would expect the solution for G {\displaystyle G} to deviate only slightly from unity. The corresponding scales for F {\displaystyle F} and H {\displaystyle H} can be derived from the self-similar equations. Therefore, To first order (neglecting F ^ 2 , G ^ 2 , H ^ 2 {\displaystyle {\hat {F}}^{2},{\hat {G}}^{2},{\hat {H}}^{2}} ), the self-similar equation [ 10 ] becomes with exact solutions These solutions are similar to an Ekman layer [ 10 ] solution. The flow admits a non-axisymmetric solution with axisymmetric boundary conditions, discovered by Hewitt, Duck and Foster. [ 12 ] Defining η = Ω ν z , γ = Γ Ω , u = r Ω [ f ′ ( η ) + ϕ ( η ) cos ⁡ 2 θ ] , v = r Ω [ g ( η ) − ϕ ( η ) sin ⁡ 2 θ ] , w = − 2 ν Ω f ( η ) , {\displaystyle \eta ={\sqrt {\frac {\Omega }{\nu }}}z,\quad \gamma ={\frac {\Gamma }{\Omega }},\quad u=r\Omega [f'(\eta )+\phi (\eta )\cos 2\theta ],\quad v=r\Omega [g(\eta )-\phi (\eta )\sin 2\theta ],\quad w=-2{\sqrt {\nu \Omega }}f(\eta ),} the governing equations are with boundary conditions The solution is found, by numerical integration, to exist for − 0.14485 ≤ γ ≤ 0 {\displaystyle -0.14485\leq \gamma \leq 0} . Bödewadt flow describes the flow when a stationary disk is placed in a rotating fluid.
[ 13 ] This problem was addressed by George Keith Batchelor (1951), [ 5 ] Keith Stewartson (1952) [ 14 ] and many other researchers. Here the solution is not simple, because of the additional length scale imposed on the problem, i.e., the distance h {\displaystyle h} between the two disks. In addition, the uniqueness and existence of a steady solution also depend on the corresponding Reynolds number R e = Ω h 2 / ν {\displaystyle Re=\Omega h^{2}/\nu } . Then the boundary conditions for the fluid 0 < z < h {\displaystyle 0<z<h} are In terms of η {\displaystyle \eta } , the upper wall location is simply η = Ω / ν h = R e 1 / 2 {\displaystyle \eta ={\sqrt {\Omega /\nu }}h=Re^{1/2}} . Thus, instead of the scalings used before, it is convenient to introduce the following transformation, so that the governing equations become with six boundary conditions and the pressure is given by There are six boundary conditions because the pressure is not known either at the top or bottom wall; λ {\displaystyle \lambda } is to be obtained as part of the solution. For large Reynolds number R e ≫ 1 {\displaystyle Re\gg 1} , Batchelor argued that the fluid in the core would rotate at a constant velocity, flanked by two boundary layers at each disk for γ ≥ 0 {\displaystyle \gamma \geq 0} , and that there would be two uniform counter-rotating flows of thickness ξ = 1 / 2 {\displaystyle \xi =1/2} for γ = − 1 {\displaystyle \gamma =-1} . However, Stewartson predicted that for γ = 0 , − 1 {\displaystyle \gamma =0,-1} the fluid in the core would not rotate at R e ≫ 1 {\displaystyle Re\gg 1} , leaving just two boundary layers at each disk. It turns out that Stewartson's predictions were correct (see Stewartson layer ). There is also an exact solution if the two disks are rotating about different axes, but only for γ = 1 {\displaystyle \gamma =1} .
Von Kármán swirling flow finds application in a wide range of fields, including rotating machines, filtering systems, computer storage devices, heat and mass transfer, combustion-related problems, [ 15 ] planetary formation, geophysical applications, etc.
https://en.wikipedia.org/wiki/Von_Kármán_swirling_flow
In continuum mechanics , the maximum distortion energy criterion (also von Mises yield criterion [ 1 ] ) states that yielding of a ductile material begins when the second invariant of deviatoric stress J 2 {\displaystyle J_{2}} reaches a critical value. [ 2 ] It is a part of plasticity theory that mostly applies to ductile materials, such as some metals . Prior to yield , material response can be assumed to be of a linear elastic , nonlinear elastic , or viscoelastic behavior. In materials science and engineering , the von Mises yield criterion is also formulated in terms of the von Mises stress or equivalent tensile stress , σ v {\displaystyle \sigma _{\text{v}}} . This is a scalar value of stress that can be computed from the Cauchy stress tensor . In this case, a material is said to start yielding when the von Mises stress reaches a value known as the yield strength , σ y {\displaystyle \sigma _{\text{y}}} . The von Mises stress is used to predict yielding of materials under complex loading from the results of uniaxial tensile tests . The von Mises stress satisfies the property that two stress states with equal distortion energy have an equal von Mises stress. Because the von Mises yield criterion is independent of the first stress invariant , I 1 {\displaystyle I_{1}} , it is applicable for the analysis of plastic deformation for ductile materials such as metals, as the onset of yield for these materials does not depend on the hydrostatic component of the stress tensor . Although it has long been believed that it was formulated by James Clerk Maxwell in 1865, Maxwell only described the general conditions in a letter to William Thomson (Lord Kelvin). [ 3 ] Richard Edler von Mises rigorously formulated it in 1913. [ 2 ] [ 4 ] Tytus Maksymilian Huber (1904), in a paper written in Polish, anticipated this criterion to some extent by properly relying on the distortion strain energy, not on the total strain energy as his predecessors had.
[ 5 ] [ 6 ] [ 7 ] Heinrich Hencky formulated the same criterion as von Mises independently in 1924. [ 8 ] For the above reasons this criterion is also referred to as the "Maxwell–Huber–Hencky–von Mises theory". Mathematically the von Mises yield criterion is expressed as: Here k {\displaystyle k} is the yield stress of the material in pure shear. As shown later in this article, at the onset of yielding, the magnitude of the shear yield stress in pure shear is √3 times lower than the tensile yield stress in the case of simple tension. Thus, we have: where σ y {\displaystyle \sigma _{y}} is the tensile yield strength of the material. If we set the von Mises stress equal to the yield strength and combine the above equations, the von Mises yield criterion is written as: or Substituting J 2 {\displaystyle J_{2}} with the Cauchy stress tensor components, we get where s {\displaystyle s} is the deviatoric stress. This equation defines the yield surface as a circular cylinder (see Figure) whose yield curve, or intersection with the deviatoric plane, is a circle with radius 2 k {\displaystyle {\sqrt {2}}k} , or 2 3 σ y {\textstyle {\sqrt {\frac {2}{3}}}\sigma _{y}} . This implies that the yield condition is independent of hydrostatic stresses. In the case of uniaxial stress or simple tension, σ 1 ≠ 0 , σ 3 = σ 2 = 0 {\displaystyle \sigma _{1}\neq 0,\sigma _{3}=\sigma _{2}=0} , the von Mises criterion simply reduces to which means the material starts to yield when σ 1 {\displaystyle \sigma _{1}} reaches the yield strength of the material σ y {\displaystyle \sigma _{\text{y}}} , in agreement with the definition of tensile (or compressive) yield strength. An equivalent tensile stress or equivalent von Mises stress, σ v {\displaystyle \sigma _{\text{v}}} , is used to predict yielding of materials under multiaxial loading conditions using results from simple uniaxial tensile tests.
Thus, we define where s i j {\displaystyle s_{ij}} are components of stress deviator tensor σ dev {\displaystyle {\boldsymbol {\sigma }}^{\text{dev}}} : In this case, yielding occurs when the equivalent stress, σ v {\displaystyle \sigma _{\text{v}}} , reaches the yield strength of the material in simple tension, σ y {\displaystyle \sigma _{\text{y}}} . As an example, the stress state of a steel beam in compression differs from the stress state of a steel axle under torsion, even if both specimens are of the same material. In view of the stress tensor, which fully describes the stress state, this difference manifests in six degrees of freedom , because the stress tensor has six independent components. Therefore, it is difficult to tell which of the two specimens is closer to the yield point or has even reached it. However, by means of the von Mises yield criterion, which depends solely on the value of the scalar von Mises stress, i.e., one degree of freedom, this comparison is straightforward: A larger von Mises value implies that the material is closer to the yield point. In the case of pure shear stress , σ 12 = σ 21 ≠ 0 {\displaystyle \sigma _{12}=\sigma _{21}\neq 0} , while all other σ i j = 0 {\displaystyle \sigma _{ij}=0} , von Mises criterion becomes: This means that, at the onset of yielding, the magnitude of the shear stress in pure shear is 3 {\displaystyle {\sqrt {3}}} times lower than the yield stress in the case of simple tension. The von Mises yield criterion for pure shear stress, expressed in principal stresses, is In the case of principal plane stress, σ 3 = 0 {\displaystyle \sigma _{3}=0} and σ 12 = σ 23 = σ 31 = 0 {\displaystyle \sigma _{12}=\sigma _{23}=\sigma _{31}=0} , the von Mises criterion becomes: This equation represents an ellipse in the plane σ 1 − σ 2 {\displaystyle \sigma _{1}-\sigma _{2}} . 
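The equivalent stress is straightforward to evaluate from a 3 × 3 Cauchy stress tensor: subtract the hydrostatic part to get the deviator s, then take σ_v = √(3 J₂) = √(3/2 s : s). A minimal sketch (NumPy; the function name is illustrative, not from the article):

```python
import numpy as np

def von_mises(sigma):
    """Equivalent von Mises stress of a 3x3 Cauchy stress tensor."""
    sigma = np.asarray(sigma, dtype=float)
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric stress tensor
    return np.sqrt(1.5 * np.sum(s * s))             # sqrt(3 J2)

# Uniaxial tension: sigma_v equals the applied stress
print(von_mises(np.diag([250.0, 0.0, 0.0])))        # ~250.0

# Pure shear tau: sigma_v = sqrt(3) * tau, so yield occurs at tau = sigma_y / sqrt(3)
tau = 100.0
print(von_mises([[0, tau, 0], [tau, 0, 0], [0, 0, 0]]))  # ~173.2

# Hydrostatic pressure gives sigma_v = 0: no yielding, as the text states
print(von_mises(-50.0 * np.eye(3)))                 # ~0.0
```

The three cases reproduce the claims in the text: the criterion recovers the tensile yield point under uniaxial load, predicts shear yield at σ_y/√3, and is blind to hydrostatic stress.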
Hencky (1924) offered a physical interpretation of the von Mises criterion, suggesting that yielding begins when the elastic energy of distortion reaches a critical value. [ 6 ] For this reason, the von Mises criterion is also known as the maximum distortion strain energy criterion. This comes from the relation between J 2 {\displaystyle J_{2}} and the elastic strain energy of distortion W D {\displaystyle W_{\text{D}}} : In 1937, [ 9 ] Arpad L. Nadai suggested that yielding begins when the octahedral shear stress reaches a critical value, i.e. the octahedral shear stress of the material at yield in simple tension. In this case, the von Mises yield criterion is also known as the maximum octahedral shear stress criterion in view of the direct proportionality that exists between J 2 {\displaystyle J_{2}} and the octahedral shear stress, τ oct {\displaystyle \tau _{\text{oct}}} , which by definition is thus we have As shown in the equations above, the use of the von Mises criterion as a yield criterion is exactly applicable only when the material is isotropic and the ratio of the shear yield strength to the tensile yield strength has the following value: [ 10 ] Since no material will have this ratio precisely, in practice it is necessary to use engineering judgement to decide what failure theory is appropriate for a given material. Alternatively, for use of the Tresca theory, the same ratio is defined as 1/2. The yield margin of safety is written as
https://en.wikipedia.org/wiki/Von_Mises_yield_criterion
In recreational mathematics , von Neumann's elephant is a problem of constructing a planar curve in the shape of an elephant from only four fixed parameters. It originated from a discussion between physicists John von Neumann and Enrico Fermi , and the expression is used in physics to characterize a model with so many parameters that it is overfit , will consequently match any set of experimental data, and is therefore unfalsifiable and unscientific. In a 2004 article in the journal Nature , Freeman Dyson recounts his meeting with Fermi in 1953. Fermi, while discussing a novel theory Dyson was proposing, offered the harsh critique, "There are two ways of doing calculations in theoretical physics. One way ... is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither." Dyson countered by stating that his theory matched Fermi's own data. Fermi asked how many arbitrary parameters Dyson had used and, upon learning that there were four of them, quoted his friend von Neumann, [ 1 ] "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." By this he meant that Dyson's simulations relied on too many free parameters , suggesting an overfitting phenomenon. Stunned, Dyson agreed with the argument, finished the set of articles so that his students could get their names into the research journals, and switched to another field of study. [ 1 ] The phrase became popular enough to be used in the titles of unrelated research works, like "There is More Than One Way to Model an Elephant. Experiment-Driven Modeling of the Actin Cytoskeleton". [ 2 ] Over time, solving the problem (defining four complex numbers to draw an elephantine shape) became a topic in recreational mathematics . A 1975 attempt through least-squares function approximation required dozens of terms.
[ 3 ] An approximation using four parameters was found by three physicists in 2010. [ 4 ] The construction is based on complex Fourier analysis . [ 4 ] The curve found in 2010 is parameterized by: The four fixed parameters used are complex, with affixes z 1 = 50 - 30i , z 2 = 18 + 8i , z 3 = 12 - 10i , z 4 = -14 - 60i . The affix point z 5 = 40 + 20i is added to make the eye of the elephant and this value serves as a parameter for the movement of the "trunk". [ 4 ] Von Neumann also used the word "elephant" as a synonym for linearities and equilibrium points : elephants, equilibria, and linear systems are all equally rare in nature, thus statements about them are nontrivial and the corresponding theories meaningful. Statements and theories about non-elephants in general (non-equilibriums and non-linearities) are inevitably too broad to be of any practical use. [ 5 ]
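The four-parameter curve can be sketched in code. The trigonometric expansion used below is the commonly reproduced expansion of the four affixes z 1 …z 4 from the 2010 construction, stated here as an assumption rather than taken from the text above (whose parameterization is not reproduced):

```python
import numpy as np

# Fourier-style elephant outline; the coefficients are the real/imaginary
# parts of z1..z4 (assumed expansion of the 2010 four-parameter curve)
def elephant(t):
    x = -60*np.cos(t) + 30*np.sin(t) - 8*np.sin(2*t) + 10*np.sin(3*t)
    y =  50*np.sin(t) + 18*np.sin(2*t) - 12*np.cos(3*t) + 14*np.cos(5*t)
    return x, y

t = np.linspace(0, 2*np.pi, 1000)
x, y = elephant(t)

# The outline is a closed curve: the same point is reached at t = 0 and t = 2*pi
print(np.hypot(x[0] - x[-1], y[0] - y[-1]) < 1e-9)   # True

# z5 = 40 + 20i: by the description above, the imaginary part places the eye
# and the value also parameterizes the trunk "wiggle" (eye position assumed)
eye = (20.0, 20.0)
```

Plotting `x` against `y` (e.g. with matplotlib) draws the elephant; animating an extra wiggle term scaled by Re(z 5 ) moves the trunk.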
https://en.wikipedia.org/wiki/Von_Neumann's_elephant
In operator theory , von Neumann's inequality , due to John von Neumann , states that, for a fixed contraction T , the polynomial functional calculus map is itself a contraction: for a contraction T acting on a Hilbert space and a polynomial p , the norm of p ( T ) is bounded by the supremum of | p ( z )| for z in the unit disk. [ 1 ] The inequality can be proved by considering the unitary dilation of T , for which the inequality is obvious. This inequality is a specific case of Matsaev's conjecture, which states that for any polynomial P and contraction T on L p {\displaystyle L^{p}} where S is the right-shift operator. The von Neumann inequality proves the conjecture for p = 2 {\displaystyle p=2} , and for p = 1 {\displaystyle p=1} and p = ∞ {\displaystyle p=\infty } it holds by straightforward calculation. S.W. Drury showed in 2011 that the conjecture fails in the general case. [ 2 ]
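The inequality is easy to probe numerically in finite dimensions: scale a random matrix to a contraction, evaluate a polynomial of it, and compare the operator norm with the supremum of |p| over a fine sampling of the unit circle. A small illustrative check (not from any particular source):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a contraction: any matrix divided by its operator (spectral) norm
A = rng.standard_normal((6, 6))
T = A / np.linalg.norm(A, 2)

coeffs = [1.0, -2.0, 0.5, 3.0]   # p(z) = 1 - 2z + 0.5 z^2 + 3 z^3 (lowest degree first)

def poly_of_matrix(c, M):
    # Horner evaluation of p(M)
    n = M.shape[0]
    P = c[-1] * np.eye(n)
    for a in reversed(c[:-1]):
        P = P @ M + a * np.eye(n)
    return P

norm_pT = np.linalg.norm(poly_of_matrix(coeffs, T), 2)

# Approximate sup of |p(z)| on |z| = 1 by dense sampling
z = np.exp(2j * np.pi * np.linspace(0, 1, 4096, endpoint=False))
sup_circle = np.abs(np.polyval(coeffs[::-1], z)).max()

# By von Neumann's inequality, norm_pT does not exceed the sup on the circle
print(norm_pT, sup_circle)
```

Repeating with other seeds or polynomials never produces a violation, which is exactly what the unitary-dilation proof guarantees.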
https://en.wikipedia.org/wiki/Von_Neumann's_inequality
In mathematics , specifically functional analysis , the von Neumann bicommutant theorem relates the closure of a set of bounded operators on a Hilbert space in certain topologies to the bicommutant of that set. In essence, it is a connection between the algebraic and topological sides of operator theory . The formal statement of the theorem is as follows: This algebra is called the von Neumann algebra generated by M . There are several other topologies on the space of bounded operators, and one can ask which *-algebras are closed in these topologies. If M is closed in the norm topology then it is a C*-algebra , but not necessarily a von Neumann algebra. One such example is the C*-algebra of compact operators (on an infinite-dimensional Hilbert space). For most other common topologies the closed *-algebras containing 1 are von Neumann algebras; this applies in particular to the weak operator, strong operator, *-strong operator, ultraweak , ultrastrong , and *-ultrastrong topologies. The theorem is related to the Jacobson density theorem . Let H be a Hilbert space and L ( H ) the bounded operators on H . Consider a self-adjoint unital subalgebra M of L ( H ) (this means that M contains the adjoints of its members, and the identity operator on H ). The theorem is equivalent to the combination of the following three statements: where the W and S subscripts stand for closures in the weak and strong operator topologies, respectively. For any x and y in H , the map T → ⟨ Tx , y ⟩ is continuous in the weak operator topology, by its definition. Therefore, for any fixed operator O , so is the map Let S be any subset of L ( H ) , and S ′ its commutant . For any operator T in S ′, this function is zero for all O in S . For any T not in S ′, it must be nonzero for some O in S and some x and y in H . By its continuity there is an open neighborhood of T in the weak operator topology on which it is nonzero, and which therefore also lies outside S ′.
Hence any commutant S ′ is closed in the weak operator topology. In particular, so is M ′′ ; since it contains M , it also contains its weak operator closure. This follows directly from the weak operator topology being coarser than the strong operator topology: for every point x in cl S ( M ) , every open neighborhood of x in the weak operator topology is also open in the strong operator topology and therefore contains a member of M ; therefore x is also a member of cl W ( M ) . Fix X ∈ M ′′ . We must show that X ∈ cl S ( M ) , i.e. for each h ∈ H and any ε > 0 , there exists T in M with || Xh − Th || < ε . Fix h in H . The cyclic subspace M h = { Mh : M ∈ M } is invariant under the action of any T in M . Its closure cl( M h ) in the norm of H is a closed linear subspace, with corresponding orthogonal projection P : H → cl( M h ) in L ( H ). In fact, this P is in M ′ , as we now show. By definition of the bicommutant , we must have XP = PX . Since M is unital, h ∈ M h , and so h = Ph . Hence Xh = XPh = PXh ∈ cl( M h ) . So for each ε > 0 , there exists T in M with || Xh − Th || < ε , i.e. X is in the strong operator closure of M . A C*-algebra M acting on H is said to act non-degenerately if for h in H , M h = {0} implies h = 0 . In this case, it can be shown using an approximate identity in M that the identity operator I lies in the strong closure of M . Therefore, the conclusion of the bicommutant theorem holds for M .
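In finite dimensions the conclusion of the theorem can be checked by direct linear algebra: the commutant of a set of matrices is the null space of the linear maps X ↦ AX − XA. A sketch (NumPy; the helper name is illustrative) for the unital *-algebra generated by diag(1, 2) inside the 2 × 2 matrices, whose commutant and bicommutant are both the diagonal matrices:

```python
import numpy as np

def commutant_basis(gens, n, tol=1e-9):
    """Orthonormal basis (as n x n matrices) of {X : AX = XA for all A in gens}."""
    # With row-major vec, vec(AX - XA) = (kron(A, I) - kron(I, A.T)) vec(X)
    L = np.vstack([np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)
                   for A in gens])
    _, s, Vh = np.linalg.svd(L)
    null_rows = [Vh[i] for i in range(n * n) if i >= len(s) or s[i] < tol]
    return [v.reshape(n, n) for v in null_rows]

n = 2
M_gens = [np.eye(n), np.diag([1.0, 2.0])]    # self-adjoint unital algebra M

M_prime = commutant_basis(M_gens, n)          # M'  = diagonal matrices
M_double = commutant_basis(M_prime, n)        # M'' = diagonal matrices again

print(len(M_prime), len(M_double))            # 2 2
# Every element of M'' is diagonal, i.e. lies back in the algebra generated by M
print(all(abs(B[0, 1]) + abs(B[1, 0]) < 1e-9 for B in M_double))   # True
```

Here M′′ coincides with the span of the generators, matching the finite-dimensional case of the theorem (where all the operator topologies agree, so closure adds nothing).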
https://en.wikipedia.org/wiki/Von_Neumann_bicommutant_theorem
In mathematics , the von Neumann paradox , named after John von Neumann , is the idea that one can break a planar figure such as the unit square into sets of points and subject each set to an area-preserving affine transformation such that the result is two planar figures of the same size as the original. This was proved in 1929 by John von Neumann , assuming the axiom of choice . It is based on the earlier Banach–Tarski paradox , which is in turn based on the Hausdorff paradox . Banach and Tarski had proved that, using isometric transformations , the result of taking apart and reassembling a two-dimensional figure would necessarily have the same area as the original. This would make creating two unit squares out of one impossible. But von Neumann realized that the trick of such so-called paradoxical decompositions was the use of a group of transformations that include as a subgroup a free group with two generators . The group of area-preserving transformations (whether the special linear group or the special affine group ) contains such subgroups, and this opens the possibility of performing paradoxical decompositions using them. The following is an informal description of the method found by von Neumann. Assume that we have a free group H of area-preserving linear transformations generated by two transformations, σ and τ, which are not far from the identity element. Being a free group means that all its elements can be expressed uniquely in the form σ u 1 τ v 1 σ u 2 τ v 2 ⋯ σ u n τ v n {\displaystyle \sigma ^{u_{1}}\tau ^{v_{1}}\sigma ^{u_{2}}\tau ^{v_{2}}\cdots \sigma ^{u_{n}}\tau ^{v_{n}}} for some n , where the u {\displaystyle u} s and v {\displaystyle v} s are all non-zero integers, except possibly the first u {\displaystyle u} and the last v {\displaystyle v} . 
We can divide this group into two parts: those that start on the left with σ to some non-zero power (we call this set A ) and those that start with τ to some power (that is, u 1 {\displaystyle u_{1}} is zero—we call this set B , and it includes the identity). If we operate on any point in Euclidean 2-space by the various elements of H we get what is called the orbit of that point. All the points in the plane can thus be classed into orbits, of which there are an infinite number with the cardinality of the continuum . Using the axiom of choice , we can choose one point from each orbit and call the set of these points M . We exclude the origin, which is a fixed point in H . If we then operate on M by all the elements of H , we generate each point of the plane (except the origin) exactly once. If we operate on M by all the elements of A or of B , we get two disjoint sets whose union is all points but the origin. Now we take some figure such as the unit square or the unit disk. We then choose another figure totally inside it, such as a smaller square, centred at the origin. We can cover the big figure with several copies of the small figure, albeit with some points covered by two or more copies. We can then assign each point of the big figure to one of the copies of the small figure. Let us call the sets corresponding to each copy C 1 , C 2 , … , C m {\displaystyle C_{1},C_{2},\dots ,C_{m}} . We shall now make a one-to-one mapping of each point in the big figure to a point in its interior, using only area-preserving transformations. We take the points belonging to C 1 {\displaystyle C_{1}} and translate them so that the centre of the C 1 {\displaystyle C_{1}} square is at the origin. We then take those points in it which are in the set A defined above and operate on them by the area-preserving operation σ τ. This puts them into set B . We then take the points belonging to B and operate on them with σ 2 . 
They will now still be in B , but the set of these points will be disjoint from the previous set. We proceed in this manner, using σ 3 τ on the A points from C 2 (after centring it) and σ 4 on its B points, and so on. In this way, we have mapped all points from the big figure (except some fixed points) in a one-to-one manner to B type points not too far from the centre, and within the big figure. We can then make a second mapping to A type points. At this point we can apply the method of the Cantor-Bernstein-Schroeder theorem . This theorem tells us that if we have an injection from set D to set E (such as from the big figure to the A type points in it), and an injection from E to D (such as the identity mapping from the A type points in the figure to themselves), then there is a one-to-one correspondence between D and E . In other words, having a mapping from the big figure to a subset of the A points in it, we can make a mapping (a bijection) from the big figure to all the A points in it. (In some regions points are mapped to themselves, in others they are mapped using the mapping described in the previous paragraph.) Likewise we can make a mapping from the big figure to all the B points in it. So looking at this the other way round, we can separate the figure into its A and B points, and then map each of these back into the whole figure (that is, containing both kinds of points)! This sketch glosses over some things, such as how to handle fixed points. It turns out that more mappings and more sets are necessary to work around this. The paradox for the square can be strengthened as follows: This has consequences concerning the problem of measure . As von Neumann notes, To explain this a bit more, the question of whether a finitely additive measure exists, that is preserved under certain transformations, depends on what transformations are allowed. 
The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. As explained above, the points of the plane (other than the origin) can be divided into two dense sets which we may call A and B . If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the B points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the B points), and therefore there is no measure that "works". The class of groups isolated by von Neumann in the course of his study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups , or groups with an invariant mean, and they include all finite and all solvable groups . Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable. Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL (2, R ) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists. [ 2 ] More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL (2, R ) contains a punctured neighbourhood of the origin. Then all sets in the family A are SL (2, R )-equidecomposable, and likewise for the sets in B . It follows that both families consist of paradoxical sets.
https://en.wikipedia.org/wiki/Von_Neumann_paradox
In the foundations of mathematics , von Neumann–Bernays–Gödel set theory ( NBG ) is an axiomatic set theory that is a conservative extension of Zermelo–Fraenkel–choice set theory (ZFC). NBG introduces the notion of class , which is a collection of sets defined by a formula whose quantifiers range only over sets. NBG can define classes that are larger than sets, such as the class of all sets and the class of all ordinals . Morse–Kelley set theory (MK) allows classes to be defined by formulas whose quantifiers range over classes. NBG is finitely axiomatizable, while ZFC and MK are not. A key theorem of NBG is the class existence theorem, which states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. This class is built by mirroring the step-by-step construction of the formula with classes. Since all set-theoretic formulas are constructed from two kinds of atomic formulas ( membership and equality ) and finitely many logical symbols , only finitely many axioms are needed to build the classes satisfying them. This is why NBG is finitely axiomatizable. Classes are also used for other constructions, for handling the set-theoretic paradoxes , and for stating the axiom of global choice , which is stronger than ZFC's axiom of choice . John von Neumann introduced classes into set theory in 1925. The primitive notions of his theory were function and argument . Using these notions, he defined class and set. [ 1 ] Paul Bernays reformulated von Neumann's theory by taking class and set as primitive notions. [ 2 ] Kurt Gödel simplified Bernays' theory for his relative consistency proof of the axiom of choice and the generalized continuum hypothesis . [ 3 ] Classes have several uses in NBG: Once classes are added to the language of ZFC, it is easy to transform ZFC into a set theory with classes. First, the axiom schema of class comprehension is added. 
This axiom schema states: For every formula ϕ ( x 1 , … , x n ) {\displaystyle \phi (x_{1},\ldots ,x_{n})} that quantifies only over sets, there exists a class A {\displaystyle A} consisting of the n {\displaystyle n} - tuples satisfying the formula—that is, ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n ) ] . {\displaystyle \forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n})].} Then the axiom schema of replacement is replaced by a single axiom that uses a class. Finally, ZFC's axiom of extensionality is modified to handle classes: If two classes have the same elements, then they are identical. The other axioms of ZFC are not modified. [ 8 ] This theory is not finitely axiomatized. ZFC's replacement schema has been replaced by a single axiom, but the axiom schema of class comprehension has been introduced. To produce a theory with finitely many axioms, the axiom schema of class comprehension is first replaced with finitely many class existence axioms . Then these axioms are used to prove the class existence theorem, which implies every instance of the axiom schema. [ 8 ] The proof of this theorem requires only seven class existence axioms, which are used to convert the construction of a formula into the construction of a class satisfying the formula. NBG has two types of objects: classes and sets. Intuitively, every set is also a class. There are two ways to axiomatize this. [ non-primary source needed ] Bernays used many-sorted logic with two sorts: classes and sets. [ 2 ] Gödel avoided sorts by introducing primitive predicates: C l s ( A ) {\displaystyle {\mathfrak {Cls}}(A)} for " A {\displaystyle A} is a class" and M ( A ) {\displaystyle {\mathfrak {M}}(A)} for " A {\displaystyle A} is a set" (in German, "set" is Menge ). He also introduced axioms stating that every set is a class and that if class A {\displaystyle A} is a member of a class, then A {\displaystyle A} is a set. 
[ 9 ] Using predicates is the standard way to eliminate sorts. Elliott Mendelson modified Gödel's approach by having everything be a class and defining the set predicate M ( A ) {\displaystyle M(A)} as ∃ C ( A ∈ C ) . {\displaystyle \exists C(A\in C).} [ 10 ] This modification eliminates Gödel's class predicate and his two axioms. Bernays' two-sorted approach may appear more natural at first, but it creates a more complex theory. [ b ] In Bernays' theory, every set has two representations: one as a set and the other as a class. Also, there are two membership relations : the first, denoted by "∈", is between two sets; the second, denoted by "η", is between a set and a class. [ 2 ] This redundancy is required by many-sorted logic because variables of different sorts range over disjoint subdomains of the domain of discourse . The differences between these two approaches do not affect what can be proved, but they do affect how statements are written. In Gödel's approach, A ∈ C {\displaystyle A\in C} where A {\displaystyle A} and C {\displaystyle C} are classes is a valid statement. In Bernays' approach this statement has no meaning. However, if A {\displaystyle A} is a set, there is an equivalent statement: Define "set a {\displaystyle a} represents class A {\displaystyle A} " if they have the same sets as members—that is, ∀ x ( x ∈ a ⟺ x η A ) . {\displaystyle \forall x(x\in a\iff x\;\eta \;A).} The statement a η C {\displaystyle a\;\eta \;C} where set a {\displaystyle a} represents class A {\displaystyle A} is equivalent to Gödel's A ∈ C . {\displaystyle A\in C.} [ 2 ] The approach adopted in this article is that of Gödel with Mendelson's modification. This means that NBG is an axiomatic system in first-order predicate logic with equality , and its only primitive notions are class and the membership relation. A set is a class that belongs to at least one class: A {\displaystyle A} is a set if and only if ∃ C ( A ∈ C ) {\displaystyle \exists C(A\in C)} . 
A class that is not a set is called a proper class: A {\displaystyle A} is a proper class if and only if ∀ C ( A ∉ C ) {\displaystyle \forall C(A\notin C)} . [ 12 ] Therefore, every class is either a set or a proper class, and no class is both. Gödel introduced the convention that uppercase variables range over classes, while lowercase variables range over sets. [ 9 ] Gödel also used names that begin with an uppercase letter to denote particular classes, including functions and relations defined on the class of all sets. Gödel's convention is used in this article; it makes immediately visible whether a variable ranges over sets or over classes. The following axioms and definitions are needed for the proof of the class existence theorem. Axiom of extensionality. If two classes have the same elements, then they are identical. This axiom generalizes ZFC's axiom of extensionality to classes. Axiom of pairing . If x {\displaystyle x} and y {\displaystyle y} are sets, then there exists a set p {\displaystyle p} whose only members are x {\displaystyle x} and y {\displaystyle y} . As in ZFC, the axiom of extensionality implies the uniqueness of the set p {\displaystyle p} , which allows us to introduce the notation { x , y } . {\displaystyle \{x,y\}.} Ordered pairs are defined by ( x , y ) = { { x } , { x , y } } . {\displaystyle (x,y)=\{\{x\},\{x,y\}\}.} Tuples are defined inductively using ordered pairs: ( x 1 ) = x 1 , {\displaystyle (x_{1})=x_{1},} and ( x 1 , … , x n + 1 ) = ( ( x 1 , … , x n ) , x n + 1 ) . {\displaystyle (x_{1},\ldots ,x_{n+1})=((x_{1},\ldots ,x_{n}),x_{n+1}).} Class existence axioms will be used to prove the class existence theorem: For every formula in n {\displaystyle n} free set variables that quantifies only over sets, there exists a class of n {\displaystyle n} -tuples that satisfy it. The following example starts with two classes that are functions and builds a composite function . This example illustrates the techniques needed to prove the class existence theorem and motivates the class existence axioms.
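The pairing axiom licenses the standard Kuratowski coding of ordered pairs, (x, y) = {{x}, {x, y}}, and tuples reduce inductively to nested pairs (as used later in the article, e.g. (x, y, t) = ((x, y), t)). A quick sketch with frozensets; the function names `pair` and `ntuple` are mine:

```python
# Hereditarily finite sets modeled as Python frozensets.
def pair(x, y):
    """Kuratowski ordered pair: (x, y) = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def ntuple(*xs):
    """Tuples defined inductively from ordered pairs:
    (x1) = x1 and (x1, ..., x_{n+1}) = ((x1, ..., x_n), x_{n+1})."""
    if len(xs) == 1:
        return xs[0]
    return pair(ntuple(*xs[:-1]), xs[-1])

empty = frozenset()
a, b, c = empty, frozenset({empty}), frozenset({frozenset({empty})})
```

In particular, a 3-tuple really is an ordered pair whose first component is an ordered pair, which is what lets the domain operation strip one component at a time.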
Because this formula is built from simpler formulas using conjunction ∧ {\displaystyle \land } and existential quantification ∃ {\displaystyle \exists } , class operations are needed that take classes representing the simpler formulas and produce classes representing the formulas with ∧ {\displaystyle \land } and ∃ {\displaystyle \exists } . To produce a class representing a formula with ∧ {\displaystyle \land } , intersection is used since x ∈ A ∩ B ⟺ x ∈ A ∧ x ∈ B . {\displaystyle x\in A\cap B\iff x\in A\land x\in B.} To produce a class representing a formula with ∃ {\displaystyle \exists } , the domain is used since x ∈ D o m ( A ) ⟺ ∃ t [ ( x , t ) ∈ A ] . {\displaystyle x\in Dom(A)\iff \exists t[(x,t)\in A].} Before taking the intersection, the tuples in F {\displaystyle F} and G {\displaystyle G} need an extra component so they have the same variables. The component y {\displaystyle y} is added to the tuples of F {\displaystyle F} and x {\displaystyle x} is added to the tuples of G {\displaystyle G} : F ′ = { ( x , t , y ) : ( x , t ) ∈ F } {\displaystyle F'=\{(x,t,y):(x,t)\in F\}\,} and G ′ = { ( t , y , x ) : ( t , y ) ∈ G } {\displaystyle \,G'=\{(t,y,x):(t,y)\in G\}} In the definition of F ′ , {\displaystyle F',} the variable y {\displaystyle y} is not restricted by the statement ( x , t ) ∈ F , {\displaystyle (x,t)\in F,} so y {\displaystyle y} ranges over the class V {\displaystyle V} of all sets. Similarly, in the definition of G ′ , {\displaystyle G',} the variable x {\displaystyle x} ranges over V . {\displaystyle V.} So an axiom is needed that adds an extra component (whose values range over V {\displaystyle V} ) to the tuples of a given class.
Next, the variables are put in the same order to prepare for the intersection: F ″ = { ( x , y , t ) : ( x , t ) ∈ F } {\displaystyle F''=\{(x,y,t):(x,t)\in F\}\,} and G ″ = { ( x , y , t ) : ( t , y ) ∈ G } {\displaystyle \,G''=\{(x,y,t):(t,y)\in G\}} To go from F ′ {\displaystyle F'} to F ″ {\displaystyle F''} and from G ′ {\displaystyle G'} to G ″ {\displaystyle G''} requires two different permutations , so axioms that support permutations of tuple components are needed. The intersection of F ″ {\displaystyle F''} and G ″ {\displaystyle G''} handles ∧ {\displaystyle \land } : F ″ ∩ G ″ = { ( x , y , t ) : ( x , t ) ∈ F ∧ ( t , y ) ∈ G } {\displaystyle F''\cap G''=\{(x,y,t):(x,t)\in F\,\land \,(t,y)\in G\}} Since ( x , y , t ) {\displaystyle (x,y,t)} is defined as ( ( x , y ) , t ) {\displaystyle ((x,y),t)} , taking the domain of F ″ ∩ G ″ {\displaystyle F''\cap G''} handles ∃ t {\displaystyle \exists t} and produces the composite function: G ∘ F = D o m ( F ″ ∩ G ″ ) = { ( x , y ) : ∃ t ( ( x , t ) ∈ F ∧ ( t , y ) ∈ G ) } {\displaystyle G\circ F=Dom(F''\cap G'')=\{(x,y):\exists t((x,t)\in F\,\land \,(t,y)\in G)\}} So axioms of intersection and domain are needed. The class existence axioms are divided into two groups: axioms handling language primitives and axioms handling tuples. There are four axioms in the first group and three axioms in the second group. [ d ] Axioms for handling language primitives: Membership. There exists a class E {\displaystyle E} containing all the ordered pairs whose first component is a member of the second component. Intersection (conjunction). For any two classes A {\displaystyle A} and B {\displaystyle B} , there is a class C {\displaystyle C} consisting precisely of the sets that belong to both A {\displaystyle A} and B {\displaystyle B} . Complement (negation). For any class A {\displaystyle A} , there is a class B {\displaystyle B} consisting precisely of the sets not belonging to A {\displaystyle A} . 
Domain (existential quantifier). For any class A {\displaystyle A} , there is a class B {\displaystyle B} consisting precisely of the first components of the ordered pairs of A {\displaystyle A} . By the axiom of extensionality, class C {\displaystyle C} in the intersection axiom and class B {\displaystyle B} in the complement and domain axioms are unique. They will be denoted by: A ∩ B , {\displaystyle A\cap B,} ∁ A , {\displaystyle \complement A,} and D o m ( A ) , {\displaystyle Dom(A),} respectively. [ e ] The first three axioms imply the existence of the empty class and the class of all sets: The membership axiom implies the existence of a class E . {\displaystyle E.} The intersection and complement axioms imply the existence of E ∩ ∁ E {\displaystyle E\cap \complement E} , which is empty. By the axiom of extensionality, this class is unique; it is denoted by ∅ . {\displaystyle \emptyset .} The complement of ∅ {\displaystyle \emptyset } is the class V {\displaystyle V} of all sets, which is also unique by extensionality. The set predicate M ( A ) {\displaystyle M(A)} , which was defined as ∃ C ( A ∈ C ) {\displaystyle \exists C(A\in C)} , is now redefined as A ∈ V {\displaystyle A\in V} to avoid quantifying over classes. Axioms for handling tuples: Product by V {\displaystyle V} . For any class A {\displaystyle A} , there is a class B {\displaystyle B} consisting of the ordered pairs whose first component belongs to A {\displaystyle A} . Circular permutation . For any class A {\displaystyle A} , there is a class B {\displaystyle B} whose 3‑tuples are obtained by applying the circular permutation ( y , z , x ) ↦ ( x , y , z ) {\displaystyle (y,z,x)\mapsto (x,y,z)} to the 3‑tuples of A {\displaystyle A} . Transposition . For any class A {\displaystyle A} , there is a class B {\displaystyle B} whose 3‑tuples are obtained by transposing the last two components of the 3‑tuples of A {\displaystyle A} . 
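The tuple-handling axioms and the composite-function construction of the example above can be played out on finite relations. In the Python sketch below, a class of n-tuples is a plain set of Python tuples over a small universe `V`; the integers are opaque stand-ins for sets, so the membership axiom is omitted, and all names are illustrative rather than the article's:

```python
from itertools import product

V = [0, 1, 2, 3]                 # toy stand-in for the class of all sets

def intersection(A, B):          # conjunction
    return A & B

def complement(A, n):            # negation, relative to V^n
    return set(product(V, repeat=n)) - A

def dom(A):                      # existential quantifier:
    return {t[:-1] for t in A}   # drop the last component

def product_by_V(A):             # add a component ranging over V
    return {t + (y,) for t in A for y in V}

def circular(A):                 # (y, z, x) -> (x, y, z)
    return {(x, y, z) for (y, z, x) in A}

def transpose(A):                # swap the last two components
    return {t[:-2] + (t[-1], t[-2]) for t in A}

# Composite function G∘F, exactly as in the example: expand with an
# extra V-component, permute into the order (x, y, t), intersect,
# then take the domain to handle "exists t".
F = {(0, 1), (1, 2)}             # pairs (x, t)
G = {(1, 3), (2, 0)}             # pairs (t, y)
Fpp = transpose(product_by_V(F))            # F'' = {(x, y, t)}
Gpp = transpose(circular(product_by_V(G)))  # G'' = {(x, y, t)}
compose = dom(intersection(Fpp, Gpp))       # G∘F = {(x, y)}
```

The result agrees with composing the two relations directly, which is the point of the construction: each logical symbol of the defining formula corresponds to one class operation.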
By extensionality, the product by V {\displaystyle V} axiom implies the existence of a unique class, which is denoted by A × V . {\displaystyle A\times V.} This axiom is used to define the class V n {\displaystyle V^{n}} of all n {\displaystyle n} -tuples : V 1 = V {\displaystyle V^{1}=V} and V n + 1 = V n × V . {\displaystyle V^{n+1}=V^{n}\times V.\,} If A {\displaystyle A} is a class, extensionality implies that A ∩ V n {\displaystyle A\cap V^{n}} is the unique class consisting of the n {\displaystyle n} -tuples of A . {\displaystyle A.} For example, the membership axiom produces a class E {\displaystyle E} that may contain elements that are not ordered pairs, while the intersection E ∩ V 2 {\displaystyle E\cap V^{2}} contains only the ordered pairs of E {\displaystyle E} . The circular permutation and transposition axioms do not imply the existence of unique classes because they specify only the 3‑tuples of class B . {\displaystyle B.} By specifying the 3‑tuples, these axioms also specify the n {\displaystyle n} -tuples for n ≥ 4 {\displaystyle n\geq 4} since: ( x 1 , … , x n − 2 , x n − 1 , x n ) = ( ( x 1 , … , x n − 2 ) , x n − 1 , x n ) . {\displaystyle (x_{1},\ldots ,x_{n-2},x_{n-1},x_{n})=((x_{1},\ldots ,x_{n-2}),x_{n-1},x_{n}).} The axioms for handling tuples and the domain axiom imply the following lemma, which is used in the proof of the class existence theorem. Tuple lemma — One more axiom is needed to prove the class existence theorem: the axiom of regularity . Since the existence of the empty class has been proved, the usual statement of this axiom is given. [ f ] Axiom of regularity . Every nonempty set has at least one element with which it has no element in common. ∀ a [ a ≠ ∅ ⟹ ∃ u ( u ∈ a ∧ u ∩ a = ∅ ) ] . {\displaystyle \forall a\,[a\neq \emptyset \implies \exists u(u\in a\land u\cap a=\emptyset )].} This axiom implies that a set cannot belong to itself: Assume that x ∈ x {\displaystyle x\in x} and let a = { x } . 
{\displaystyle a=\{x\}.} Then x ∩ a ≠ ∅ {\displaystyle x\cap a\neq \emptyset } since x ∈ x ∩ a . {\displaystyle x\in x\cap a.} This contradicts the axiom of regularity because x {\displaystyle x} is the only element in a . {\displaystyle a.} Therefore, x ∉ x . {\displaystyle x\notin x.} The axiom of regularity also prohibits infinite descending membership sequences of sets: ⋯ ∈ x n + 1 ∈ x n ∈ ⋯ ∈ x 1 ∈ x 0 . {\displaystyle \cdots \in x_{n+1}\in x_{n}\in \cdots \in x_{1}\in x_{0}.} Gödel stated regularity for classes rather than for sets in his 1940 monograph, which was based on lectures given in 1938. [ 26 ] In 1939, he proved that regularity for sets implies regularity for classes. [ 27 ] Class existence theorem — Let ϕ ( x 1 , … , x n , Y 1 , … , Y m ) {\displaystyle \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})} be a formula that quantifies only over sets and contains no free variables other than x 1 , … , x n , Y 1 , … , Y m {\displaystyle x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m}} (not necessarily all of these). Then for all Y 1 , … , Y m {\displaystyle Y_{1},\dots ,Y_{m}} , there exists a unique class A {\displaystyle A} of n {\displaystyle n} -tuples such that: ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ] . {\displaystyle \forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].} The class A {\displaystyle A} is denoted by { ( x 1 , … , x n ) : ϕ ( x 1 , … , x n , Y 1 , … , Y m ) } . {\displaystyle \{(x_{1},\dots ,x_{n}):\phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})\}.} [ g ] The theorem's proof will be done in two steps: Transformation rules. In rules 1 and 2 below, Δ {\displaystyle \Delta } and Γ {\displaystyle \Gamma } denote set or class variables. These two rules eliminate all occurrences of class variables before an ∈ {\displaystyle \in } and all occurrences of equality. 
Each time rule 1 or 2 is applied to a subformula, i {\displaystyle i} is chosen so that z i {\displaystyle z_{i}} differs from the other variables in the current formula. The three rules are repeated until there are no subformulas to which they can be applied. This produces a formula that is built only with ¬ {\displaystyle \neg } , ∧ {\displaystyle \land } , ∃ {\displaystyle \exists } , ∈ {\displaystyle \in } , set variables, and class variables Y k {\displaystyle Y_{k}} where Y k {\displaystyle Y_{k}} does not appear before an ∈ {\displaystyle \in } . Transformation rules: bound variables . Consider the composite function formula of example 1 with its free set variables replaced by x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} : ∃ t [ ( x 1 , t ) ∈ F ∧ ( t , x 2 ) ∈ G ] . {\displaystyle \exists t[(x_{1},t)\in F\,\land \,(t,x_{2})\in G].} The inductive proof will remove ∃ t {\displaystyle \exists t} , which produces the formula ( x 1 , t ) ∈ F ∧ ( t , x 2 ) ∈ G . {\displaystyle (x_{1},t)\in F\land (t,x_{2})\in G.} However, since the class existence theorem is stated for subscripted variables, this formula does not have the form expected by the induction hypothesis . This problem is solved by replacing the variable t {\displaystyle t} with x 3 . {\displaystyle x_{3}.} Bound variables within nested quantifiers are handled by increasing the subscript by one for each successive quantifier. This leads to rule 4, which must be applied after the other rules since rules 1 and 2 produce quantified variables. 
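Rule 4's renaming scheme is mechanical enough to script. In the sketch below, formulas are nested Python tuples and each bound variable at quantifier nesting depth q is renamed to x_{n+q}; the tuple encoding and the name `rename` are my assumptions, not the article's notation:

```python
def rename(phi, n, depth=0, env=None):
    """Rule 4: rename the bound variable of each quantifier at
    nesting depth q to x_{n+q}, where n is the number of free set
    variables.  Formula encoding: ('in', a, b) for a ∈ b,
    ('not', p), ('and', p, q), ('exists', var, body)."""
    env = env or {}
    op = phi[0]
    if op == 'in':
        return ('in', env.get(phi[1], phi[1]), env.get(phi[2], phi[2]))
    if op == 'not':
        return ('not', rename(phi[1], n, depth, env))
    if op == 'and':
        return ('and', rename(phi[1], n, depth, env),
                       rename(phi[2], n, depth, env))
    if op == 'exists':
        new = f'x{n + depth + 1}'    # this quantifier sits at depth + 1
        return ('exists', new,
                rename(phi[2], n, depth + 1, {**env, phi[1]: new}))

# Example 2's formula phi(x1), with its original bound variables:
phi = ('and',
       ('exists', 'u', ('and', ('in', 'u', 'x1'),
                        ('not', ('exists', 'v', ('in', 'v', 'u'))))),
       ('exists', 'w', ('and', ('in', 'w', 'x1'),
                        ('exists', 'y', ('and', ('in', 'y', 'w'),
                         ('not', ('exists', 'z', ('in', 'z', 'y'))))))))
phi_r = rename(phi, 1)
```

Running this with n = 1 renames u and w to x2, v and y to x3, and z to x4, reproducing how disjoint quantifier scopes at the same depth reuse the same subscript.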
ϕ ( x 1 ) = ∃ u [ u ∈ x 1 ∧ ¬ ∃ v ( v ∈ u ) ] ∧ ∃ w ( w ∈ x 1 ∧ ∃ y [ y ∈ w ∧ ¬ ∃ z ( z ∈ y ) ] ) ϕ r ( x 1 ) = ∃ x 2 [ x 2 ∈ x 1 ∧ ¬ ∃ x 3 ( x 3 ∈ x 2 ) ] ∧ ∃ x 2 ( x 2 ∈ x 1 ∧ ∃ x 3 [ x 3 ∈ x 2 ∧ ¬ ∃ x 4 ( x 4 ∈ x 3 ) ] ) {\displaystyle {\begin{aligned}\phi (x_{1})&=\exists u\,[u\in x_{1}\land \neg \exists v\,(v\in u)]\land \exists w\,{\bigl (}w\in x_{1}\land \exists y\,[y\in w\land \neg \exists z\,(z\in y)]{\bigr )}\\\phi _{r}(x_{1})&=\exists x_{2}[x_{2}\in x_{1}\land \neg \exists x_{3}(x_{3}\in x_{2})]\land \exists x_{2}{\bigl (}x_{2}\in x_{1}\land \exists x_{3}[x_{3}\in x_{2}\land \neg \exists x_{4}(x_{4}\in x_{3})]{\bigr )}\end{aligned}}} Since x 1 {\displaystyle x_{1}} is the only free variable, n = 1. {\displaystyle n=1.} The quantified variable x 3 {\displaystyle x_{3}} appears twice in x 3 ∈ x 2 {\displaystyle x_{3}\in x_{2}} at nesting depth 2. Its subscript is 3 because n + q = 1 + 2 = 3. {\displaystyle n+q=1+2=3.} If two quantifier scopes are at the same nesting depth, they are either identical or disjoint. The two occurrences of x 3 {\displaystyle x_{3}} are in disjoint quantifier scopes, so they do not interact with each other. Proof of the class existence theorem. The proof starts by applying the transformation rules to the given formula to produce a transformed formula. Since this formula is equivalent to the given formula, the proof is completed by proving the class existence theorem for transformed formulas. The following lemma is used in the proof. Expansion lemma — Let 1 ≤ i < j ≤ n , {\displaystyle 1\leq i<j\leq n,} and let P {\displaystyle P} be a class containing all the ordered pairs ( x i , x j ) {\displaystyle (x_{i},x_{j})} satisfying R ( x i , x j ) . {\displaystyle R(x_{i},x_{j}).} That is, P ⊇ { ( x i , x j ) : R ( x i , x j ) } . 
{\displaystyle P\supseteq \{(x_{i},x_{j}):R(x_{i},x_{j})\}.} Then P {\displaystyle P} can be expanded into the unique class Q {\displaystyle Q} of n {\displaystyle n} -tuples satisfying R ( x i , x j ) {\displaystyle R(x_{i},x_{j})} . That is, Q = { ( x 1 , … , x n ) : R ( x i , x j ) } . {\displaystyle Q=\{(x_{1},\ldots ,x_{n}):R(x_{i},x_{j})\}.} Proof: Class existence theorem for transformed formulas — Let ϕ ( x 1 , … , x n , Y 1 , … , Y m ) {\displaystyle \phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})} be a formula that: Then for all Y 1 , … , Y m {\displaystyle Y_{1},\dots ,Y_{m}} , there exists a unique class A {\displaystyle A} of n {\displaystyle n} -tuples such that: ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ] . {\displaystyle \forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})].} Proof: Basis step: ϕ {\displaystyle \phi } has 0 logical symbols. The theorem's hypothesis implies that ϕ {\displaystyle \phi } is an atomic formula of the form x i ∈ x j {\displaystyle x_{i}\in x_{j}} or x i ∈ Y k . {\displaystyle x_{i}\in Y_{k}.} Case 1: If ϕ {\displaystyle \phi } is x i ∈ x j {\displaystyle x_{i}\in x_{j}} , we build the class E i , j , n = { ( x 1 , … , x n ) : x i ∈ x j } , {\displaystyle E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\},} the unique class of n {\displaystyle n} -tuples satisfying x i ∈ x j . {\displaystyle x_{i}\in x_{j}.} Case a: ϕ {\displaystyle \phi } is x i ∈ x j {\displaystyle x_{i}\in x_{j}} where i < j . {\displaystyle i<j.} The axiom of membership produces a class P {\displaystyle P} containing all the ordered pairs ( x i , x j ) {\displaystyle (x_{i},x_{j})} satisfying x i ∈ x j . {\displaystyle x_{i}\in x_{j}.} Apply the expansion lemma to P {\displaystyle P} to obtain E i , j , n = { ( x 1 , … , x n ) : x i ∈ x j } . 
{\displaystyle E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\}.} Case b: ϕ {\displaystyle \phi } is x i ∈ x j {\displaystyle x_{i}\in x_{j}} where i > j . {\displaystyle i>j.} The axiom of membership produces a class P {\displaystyle P} containing all the ordered pairs ( x i , x j ) {\displaystyle (x_{i},x_{j})} satisfying x i ∈ x j . {\displaystyle x_{i}\in x_{j}.} Apply the tuple lemma's statement 4 to P {\displaystyle P} to obtain P ′ {\displaystyle P'} containing all the ordered pairs ( x j , x i ) {\displaystyle (x_{j},x_{i})} satisfying x i ∈ x j . {\displaystyle x_{i}\in x_{j}.} Apply the expansion lemma to P ′ {\displaystyle P'} to obtain E i , j , n = { ( x 1 , … , x n ) : x i ∈ x j } . {\displaystyle E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\}.} Case c: ϕ {\displaystyle \phi } is x i ∈ x j {\displaystyle x_{i}\in x_{j}} where i = j . {\displaystyle i=j.} Since this formula is false by the axiom of regularity , no n {\displaystyle n} -tuples satisfy it, so E i , j , n = ∅ . {\displaystyle E_{i,j,n}=\emptyset .} Case 2: If ϕ {\displaystyle \phi } is x i ∈ Y k {\displaystyle x_{i}\in Y_{k}} , we build the class E i , Y k , n = { ( x 1 , … , x n ) : x i ∈ Y k } , {\displaystyle E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\},} the unique class of n {\displaystyle n} -tuples satisfying x i ∈ Y k . {\displaystyle x_{i}\in Y_{k}.} Case a: ϕ {\displaystyle \phi } is x i ∈ Y k {\displaystyle x_{i}\in Y_{k}} where i < n . {\displaystyle i<n.} Apply the axiom of product by V {\displaystyle V} to Y k {\displaystyle Y_{k}} to produce the class P = Y k × V = { ( x i , x i + 1 ) : x i ∈ Y k } . {\displaystyle P=Y_{k}\times V=\{(x_{i},x_{i+1}):x_{i}\in Y_{k}\}.} Apply the expansion lemma to P {\displaystyle P} to obtain E i , Y k , n = { ( x 1 , … , x n ) : x i ∈ Y k } . {\displaystyle E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\}.} Case b: ϕ {\displaystyle \phi } is x i ∈ Y k {\displaystyle x_{i}\in Y_{k}} where i = n > 1. 
{\displaystyle i=n>1.} Apply the axiom of product by V {\displaystyle V} to Y k {\displaystyle Y_{k}} to produce the class P = Y k × V = { ( x i , x i − 1 ) : x i ∈ Y k } . {\displaystyle P=Y_{k}\times V=\{(x_{i},x_{i-1}):x_{i}\in Y_{k}\}.} Apply the tuple lemma's statement 4 to P {\displaystyle P} to obtain P ′ = V × Y k = { ( x i − 1 , x i ) : x i ∈ Y k } . {\displaystyle P'=V\times Y_{k}=\{(x_{i-1},x_{i}):x_{i}\in Y_{k}\}.} Apply the expansion lemma to P ′ {\displaystyle P'} to obtain E i , Y k , n = { ( x 1 , … , x n ) : x i ∈ Y k } . {\displaystyle E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\}.} Case c: ϕ {\displaystyle \phi } is x i ∈ Y k {\displaystyle x_{i}\in Y_{k}} where i = n = 1. {\displaystyle i=n=1.} Then E i , Y k , n = Y k . {\displaystyle E_{i,Y_{k},n}=Y_{k}.} Inductive step: ϕ {\displaystyle \phi } has k {\displaystyle k} logical symbols where k > 0 {\displaystyle k>0} . Assume the induction hypothesis that the theorem is true for all ψ {\displaystyle \psi } with less than k {\displaystyle k} logical symbols. We now prove the theorem for ϕ {\displaystyle \phi } with k {\displaystyle k} logical symbols. In this proof, the list of class variables Y 1 , … , Y m {\displaystyle Y_{1},\dots ,Y_{m}} is abbreviated by Y → {\displaystyle {\vec {Y}}} , so a formula—such as ϕ ( x 1 , … , x n , Y 1 , … , Y m ) {\displaystyle \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})} —can be written as ϕ ( x 1 , … , x n , Y → ) . {\displaystyle \phi (x_{1},\dots ,x_{n},{\vec {Y}}).} Case 1: ϕ ( x 1 , … , x n , Y → ) = ¬ ψ ( x 1 , … , x n , Y → ) . {\displaystyle \phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\neg \psi (x_{1},\ldots ,x_{n},{\vec {Y}}).} Since ψ {\displaystyle \psi } has k − 1 {\displaystyle k-1} logical symbols, the induction hypothesis implies that there is a unique class A {\displaystyle A} of n {\displaystyle n} -tuples such that: ( x 1 , … , x n ) ∈ A ⟺ ψ ( x 1 , … , x n , Y → ) . 
{\displaystyle \quad (x_{1},\ldots ,x_{n})\in A\iff \psi (x_{1},\ldots ,x_{n},{\vec {Y}}).} By the complement axiom, there is a class ∁ A {\displaystyle \complement A} such that ∀ u [ u ∈ ∁ A ⟺ ¬ ( u ∈ A ) ] . {\displaystyle \forall u\,[u\in \complement A\iff \neg (u\in A)].} However, ∁ A {\displaystyle \complement A} contains elements other than n {\displaystyle n} -tuples if n > 1. {\displaystyle n>1.} To eliminate these elements, use ∁ V n A = {\displaystyle \complement _{V^{n}}A=\,} ∁ A ∩ V n = {\displaystyle \complement A\cap V^{n}=\,} V n ∖ A , {\displaystyle V^{n}\setminus A,} which is the complement relative to the class V n {\displaystyle V^{n}} of all n {\displaystyle n} -tuples. [ e ] Then, by extensionality, ∁ V n A {\displaystyle \complement _{V^{n}}A} is the unique class of n {\displaystyle n} -tuples such that: ( x 1 , … , x n ) ∈ ∁ V n A ⟺ ¬ [ ( x 1 , … , x n ) ∈ A ] ⟺ ¬ ψ ( x 1 , … , x n , Y → ) ⟺ ϕ ( x 1 , … , x n , Y → ) . {\displaystyle {\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in \complement _{V^{n}}A&&\iff \neg [(x_{1},\ldots ,x_{n})\in A]\\&&&\iff \neg \psi (x_{1},\ldots ,x_{n},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}} Case 2: ϕ ( x 1 , … , x n , Y → ) = ψ 1 ( x 1 , … , x n , Y → ) ∧ ψ 2 ( x 1 , … , x n , Y → ) . 
{\displaystyle \phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\psi _{1}(x_{1},\ldots ,x_{n},{\vec {Y}})\land \psi _{2}(x_{1},\ldots ,x_{n},{\vec {Y}}).} Since both ψ 1 {\displaystyle \psi _{1}} and ψ 2 {\displaystyle \psi _{2}} have less than k {\displaystyle k} logical symbols, the induction hypothesis implies that there are unique classes of n {\displaystyle n} -tuples, A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} , such that: By the axioms of intersection and extensionality, A 1 ∩ A 2 {\displaystyle A_{1}\cap A_{2}} is the unique class of n {\displaystyle n} -tuples such that: ( x 1 , … , x n ) ∈ A 1 ∩ A 2 ⟺ ( x 1 , … , x n ) ∈ A 1 ∧ ( x 1 , … , x n ) ∈ A 2 ⟺ ψ 1 ( x 1 , … , x n , Y → ) ∧ ψ 2 ( x 1 , … , x n , Y → ) ⟺ ϕ ( x 1 , … , x n , Y → ) . {\displaystyle {\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in A_{1}\cap A_{2}&&\iff (x_{1},\ldots ,x_{n})\in A_{1}\land (x_{1},\ldots ,x_{n})\in A_{2}\\&&&\iff \psi _{1}(x_{1},\ldots ,x_{n},{\vec {Y}})\land \psi _{2}(x_{1},\ldots ,x_{n},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}} Case 3: ϕ ( x 1 , … , x n , Y → ) = ∃ x n + 1 ψ ( x 1 , … , x n , x n + 1 , Y → ) . {\displaystyle \phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\exists x_{n+1}\psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}}).} The quantifier nesting depth of ψ {\displaystyle \psi } is one more than that of ϕ {\displaystyle \phi } and the additional free variable is x n + 1 . {\displaystyle x_{n+1}.} Since ψ {\displaystyle \psi } has k − 1 {\displaystyle k-1} logical symbols, the induction hypothesis implies that there is a unique class A {\displaystyle A} of ( n + 1 ) {\displaystyle (n+1)} -tuples such that: ( x 1 , … , x n , x n + 1 ) ∈ A ⟺ ψ ( x 1 , … , x n , x n + 1 , Y → ) . 
{\displaystyle \quad (x_{1},\ldots ,x_{n},x_{n+1})\in A\iff \psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}}).} By the axioms of domain and extensionality, D o m ( A ) {\displaystyle Dom(A)} is the unique class of n {\displaystyle n} -tuples such that: [ h ] ( x 1 , … , x n ) ∈ D o m ( A ) ⟺ ∃ x n + 1 [ ( ( x 1 , … , x n ) , x n + 1 ) ∈ A ] ⟺ ∃ x n + 1 [ ( x 1 , … , x n , x n + 1 ) ∈ A ] ⟺ ∃ x n + 1 ψ ( x 1 , … , x n , x n + 1 , Y → ) ⟺ ϕ ( x 1 , … , x n , Y → ) . {\displaystyle {\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in Dom(A)&&\iff \exists x_{n+1}[((x_{1},\ldots ,x_{n}),x_{n+1})\in A]\\&&&\iff \exists x_{n+1}[(x_{1},\ldots ,x_{n},x_{n+1})\in A]\\&&&\iff \exists x_{n+1}\,\psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}} Gödel pointed out that the class existence theorem "is a metatheorem , that is, a theorem about the system [NBG], not in the system …" [ 30 ] It is a theorem about NBG because it is proved in the metatheory by induction on NBG formulas. Also, its proof—instead of invoking finitely many NBG axioms—inductively describes how to use NBG axioms to construct a class satisfying a given formula. For every formula, this description can be turned into a constructive existence proof that is in NBG. Therefore, this metatheorem can generate the NBG proofs that replace uses of NBG's class existence theorem. A recursive computer program succinctly captures the construction of a class from a given formula. The definition of this program does not depend on the proof of the class existence theorem. However, the proof is needed to prove that the class constructed by the program satisfies the given formula and is built using the axioms. This program is written in pseudocode that uses a Pascal -style case statement . [ i ] f u n c t i o n Class ( ϕ , n ) i n p u t : ϕ is a transformed formula of the form ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ; n specifies that a class of n -tuples is returned. 
o u t p u t : class A of n -tuples satisfying ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ] . b e g i n c a s e ϕ o f x i ∈ x j : r e t u r n E i , j , n ; // E i , j , n = { ( x 1 , … , x n ) : x i ∈ x j } x i ∈ Y k : r e t u r n E i , Y k , n ; // E i , Y k , n = { ( x 1 , … , x n ) : x i ∈ Y k } ¬ ψ : r e t u r n ∁ V n Class ( ψ , n ) ; // ∁ V n Class ( ψ , n ) = V n ∖ Class ( ψ , n ) ψ 1 ∧ ψ 2 : r e t u r n Class ( ψ 1 , n ) ∩ Class ( ψ 2 , n ) ; ∃ x n + 1 ( ψ ) : r e t u r n D o m ( Class ( ψ , n + 1 ) ) ; // x n + 1 is free in ψ ; Class ( ψ , n + 1 ) // returns a class of ( n + 1 ) -tuples e n d e n d {\displaystyle {\begin{array}{l}\mathbf {function} \;{\text{Class}}(\phi ,\,n)\\\quad {\begin{array}{rl}\mathbf {input} \!:\;\,&\phi {\text{ is a transformed formula of the form }}\phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m});\\&n{\text{ specifies that a class of }}n{\text{-tuples is returned.}}\\\;\;\;\;\mathbf {output} \!:\;\,&{\text{class }}A{\text{ of }}n{\text{-tuples satisfying }}\\&\,\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})].\end{array}}\\\mathbf {begin} \\\quad \mathbf {case} \;\phi \;\mathbf {of} \\\qquad {\begin{alignedat}{2}x_{i}\in x_{j}:\;\;&\mathbf {return} \;\,E_{i,j,n};&&{\text{// }}E_{i,j,n}\;\,=\{(x_{1},\dots ,x_{n}):x_{i}\in x_{j}\}\\x_{i}\in Y_{k}:\;\;&\mathbf {return} \;\,E_{i,Y_{k},n};&&{\text{// }}E_{i,Y_{k},n}=\{(x_{1},\dots ,x_{n}):x_{i}\in Y_{k}\}\\\neg \psi :\;\;&\mathbf {return} \;\,\complement _{V^{n}}{\text{Class}}(\psi ,\,n);&&{\text{// }}\complement _{V^{n}}{\text{Class}}(\psi ,\,n)=V^{n}\setminus {\text{Class}}(\psi ,\,n)\\\psi _{1}\land \psi _{2}:\;\;&\mathbf {return} \;\,{\text{Class}}(\psi _{1},\,n)\cap {\text{Class}}(\psi _{2},\,n);&&\\\;\;\;\;\,\exists x_{n+1}(\psi ):\;\;&\mathbf {return} \;\,Dom({\text{Class}}(\psi ,\,n+1));&&{\text{// }}x_{n+1}{\text{ is free in }}\psi ;{\text{ Class}}(\psi ,\,n+1)\\&\ &&{\text{// 
returns a class of }}(n+1){\text{-tuples}}\end{alignedat}}\\\quad \mathbf {end} \\\mathbf {end} \end{array}}} Let ϕ {\displaystyle \phi } be the formula of example 2 . The function call A = C l a s s ( ϕ , 1 ) {\displaystyle A=Class(\phi ,1)} generates the class A , {\displaystyle A,} which is compared below with ϕ . {\displaystyle \phi .} This shows that the construction of the class A {\displaystyle A} mirrors the construction of its defining formula ϕ . {\displaystyle \phi .} ϕ = ∃ x 2 ( x 2 ∈ x 1 ∧ ¬ ∃ x 3 ( x 3 ∈ x 2 ) ) ∧ ∃ x 2 ( x 2 ∈ x 1 ∧ ∃ x 3 ( x 3 ∈ x 2 ∧ ¬ ∃ x 4 ( x 4 ∈ x 3 ) ) ) A = D o m ( E 2 , 1 , 2 ∩ ∁ V 2 D o m ( E 3 , 2 , 3 ) ) ∩ D o m ( E 2 , 1 , 2 ∩ D o m ( E 3 , 2 , 3 ∩ ∁ V 3 D o m ( E 4 , 3 , 4 ) ) ) {\displaystyle {\begin{alignedat}{2}&\phi \;&&=\;\;\exists x_{2}\,(x_{2}\!\in \!x_{1}\land \;\;\neg \;\;\;\;\exists x_{3}\;(x_{3}\!\in \!x_{2}))\,\land \;\;\,\exists x_{2}\,(x_{2}\!\in \!x_{1}\land \;\;\,\exists x_{3}\,(x_{3}\!\in \!x_{2}\,\land \;\;\neg \;\;\;\;\exists x_{4}\;(x_{4}\!\in \!x_{3})))\\&A\;&&=Dom\,(\;E_{2,1,2}\;\cap \;\complement _{V^{2}}\,Dom\,(\;E_{3,2,3}\;))\,\cap \,Dom\,(\;E_{2,1,2}\;\cap \,Dom\,(\;\,E_{3,2,3}\;\cap \;\complement _{V^{3}}\,Dom\,(\;E_{4,3,4}\;)))\end{alignedat}}} Gödel extended the class existence theorem to formulas ϕ {\displaystyle \phi } containing relations over classes (such as Y 1 ⊆ Y 2 {\displaystyle Y_{1}\subseteq Y_{2}} and the unary relation M ( Y 1 ) {\displaystyle M(Y_{1})} ), special classes (such as O r d {\displaystyle Ord} ), and operations (such as ( x 1 , x 2 ) {\displaystyle (x_{1},x_{2})} and x 1 ∩ Y 1 {\displaystyle x_{1}\cap Y_{1}} ). [ 32 ] To extend the class existence theorem, the formulas defining relations, special classes, and operations must quantify only over sets. Then ϕ {\displaystyle \phi } can be transformed into an equivalent formula satisfying the hypothesis of the class existence theorem . 
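The recursive program Class(ϕ, n) can be run literally in a finite toy model, where a "class of n-tuples" is a finite set of Python tuples and membership between "sets" is a fixed finite relation. The sketch below compresses the E_{i,j,n} constructions into direct comprehensions (the real proof builds them from the seven class existence axioms); the universe, the `member` relation, and the formula encoding are assumptions of the sketch:

```python
from itertools import product

V = [0, 1, 2, 3]                              # toy universe of "sets"
member = {(0, 1), (0, 2), (1, 2), (2, 3)}     # pairs (x, y) meaning x ∈ y

def Class(phi, n, classes=None):
    """Mirror of the pseudocode Class(phi, n): the class of n-tuples
    satisfying a transformed formula.  Cases:
    ('in_set', i, j) for x_i ∈ x_j, ('in_class', i, k) for x_i ∈ Y_k,
    ('not', psi), ('and', psi1, psi2), ('exists', psi) for ∃x_{n+1} psi."""
    classes = classes or {}
    Vn = set(product(V, repeat=n))
    op = phi[0]
    if op == 'in_set':                         # E_{i,j,n}
        return {t for t in Vn if (t[phi[1] - 1], t[phi[2] - 1]) in member}
    if op == 'in_class':                       # E_{i,Yk,n}
        return {t for t in Vn if t[phi[1] - 1] in classes[phi[2]]}
    if op == 'not':                            # complement relative to V^n
        return Vn - Class(phi[1], n, classes)
    if op == 'and':                            # intersection
        return Class(phi[1], n, classes) & Class(phi[2], n, classes)
    if op == 'exists':                         # domain: drop x_{n+1}
        return {t[:-1] for t in Class(phi[1], n + 1, classes)}

# {x1 : ∃x2 (x2 ∈ x1)} -- the "nonempty" sets of the model.
has_member = Class(('exists', ('in_set', 2, 1)), 1)
```

As in the theorem, negation is interpreted relative to V^n, conjunction as intersection, and each existential quantifier as a domain operation on a class of (n+1)-tuples.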
The following definitions specify how formulas define relations, special classes, and operations: A term is defined by: The following transformation rules eliminate relations, special classes, and operations. Each time rule 2b, 3b, or 4 is applied to a subformula, i {\displaystyle i} is chosen so that z i {\displaystyle z_{i}} differs from the other variables in the current formula. The rules are repeated until there are no subformulas to which they can be applied. Γ 1 , … , Γ k , Γ , {\displaystyle \,\Gamma _{1},\dots ,\Gamma _{k},\Gamma ,} and Δ {\displaystyle \Delta } denote terms. Y 1 ⊆ Y 2 ⟺ ∀ z 1 ( z 1 ∈ Y 1 ⟹ z 1 ∈ Y 2 ) (rule 1) {\displaystyle Y_{1}\subseteq Y_{2}\iff \forall z_{1}(z_{1}\in Y_{1}\implies z_{1}\in Y_{2})\quad {\text{(rule 1)}}} x 1 ∩ Y 1 ∈ x 2 ⟺ ∃ z 1 [ z 1 = x 1 ∩ Y 1 ∧ z 1 ∈ x 2 ] (rule 3b) ⟺ ∃ z 1 [ ∀ z 2 ( z 2 ∈ z 1 ⟺ z 2 ∈ x 1 ∩ Y 1 ) ∧ z 1 ∈ x 2 ] (rule 4) ⟺ ∃ z 1 [ ∀ z 2 ( z 2 ∈ z 1 ⟺ z 2 ∈ x 1 ∧ z 2 ∈ Y 1 ) ∧ z 1 ∈ x 2 ] (rule 3a) {\displaystyle {\begin{alignedat}{2}x_{1}\cap Y_{1}\in x_{2}&\iff \exists z_{1}[z_{1}=x_{1}\cap Y_{1}\,\land \,z_{1}\in x_{2}]&&{\text{(rule 3b)}}\\&\iff \exists z_{1}[\forall z_{2}(z_{2}\in z_{1}\iff z_{2}\in x_{1}\cap Y_{1})\,\land \,z_{1}\in x_{2}]&&{\text{(rule 4)}}\\&\iff \exists z_{1}[\forall z_{2}(z_{2}\in z_{1}\iff z_{2}\in x_{1}\land z_{2}\in Y_{1})\,\land \,z_{1}\in x_{2}]\quad &&{\text{(rule 3a)}}\\\end{alignedat}}} This example illustrates how the transformation rules work together to eliminate an operation. Class existence theorem (extended version) — Let ϕ ( x 1 , … , x n , Y 1 , … , Y m ) {\displaystyle \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})} be a formula that quantifies only over sets, contains no free variables other than x 1 , … , x n , Y 1 , … , Y m {\displaystyle x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m}} , and may contain relations, special classes, and operations defined by formulas that quantify only over sets. 
Then for all Y 1 , … , Y m , {\displaystyle Y_{1},\dots ,Y_{m},} there exists a unique class A {\displaystyle A} of n {\displaystyle n} -tuples such that ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ] . {\displaystyle \forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].} [ j ] Apply the transformation rules to ϕ {\displaystyle \phi } to produce an equivalent formula containing no relations, special classes, or operations. This formula satisfies the hypothesis of the class existence theorem. Therefore, for all Y 1 , … , Y m , {\displaystyle Y_{1},\dots ,Y_{m},} there is a unique class A {\displaystyle A} of n {\displaystyle n} -tuples satisfying ∀ x 1 ⋯ ∀ x n [ ( x 1 , … , x n ) ∈ A ⟺ ϕ ( x 1 , … , x n , Y 1 , … , Y m ) ] . {\displaystyle \forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].} The axioms of pairing and regularity, which were needed for the proof of the class existence theorem, have been given above. NBG contains four other set axioms. Three of these axioms deal with class operations being applied to sets. Definition. F {\displaystyle F} is a function if F ⊆ V 2 ∧ ∀ x ∀ y ∀ z [ ( x , y ) ∈ F ∧ ( x , z ) ∈ F ⟹ y = z ] . {\displaystyle F\subseteq V^{2}\land \forall x\,\forall y\,\forall z\,[(x,y)\in F\,\land \,(x,z)\in F\implies y=z].} In set theory, the definition of a function does not require specifying the domain or codomain of the function (see Function (set theory) ). NBG's definition of function generalizes ZFC's definition from a set of ordered pairs to a class of ordered pairs. ZFC's definitions of the set operations of image , union , and power set are also generalized to class operations. The image of class A {\displaystyle A} under the function F {\displaystyle F} is F [ A ] = { y : ∃ x ( x ∈ A ∧ ( x , y ) ∈ F ) } . 
{\displaystyle F[A]=\{y:\exists x(x\in A\,\land \,(x,y)\in F)\}.} This definition does not require that A ⊆ D o m ( F ) . {\displaystyle A\subseteq Dom(F).} The union of class A {\displaystyle A} is ∪ A = { x : ∃ y ( x ∈ y ∧ y ∈ A ) } . {\displaystyle \cup A=\{x:\exists y(x\in y\,\,\land \,y\in A)\}.} The power class of A {\displaystyle A} is P ( A ) = { x : x ⊆ A } . {\displaystyle {\mathcal {P}}(A)=\{x:x\subseteq A\}.} The extended version of the class existence theorem implies the existence of these classes. The axioms of replacement, union , and power set imply that when these operations are applied to sets, they produce sets. [ 34 ] Axiom of replacement. If F {\displaystyle F} is a function and a {\displaystyle a} is a set, then F [ a ] {\displaystyle F[a]} , the image of a {\displaystyle a} under F {\displaystyle F} , is a set. ∀ F ∀ a [ F is a function ⟹ ∃ b ∀ y ( y ∈ b ⟺ ∃ x ( x ∈ a ∧ ( x , y ) ∈ F ) ) ] . {\displaystyle \forall F\,\forall a\,[F{\text{ is a function}}\implies \exists b\,\forall y\,(y\in b\iff \exists x(x\in a\,\land \,(x,y)\in F))].} Not having the requirement A ⊆ D o m ( F ) {\displaystyle A\subseteq Dom(F)} in the definition of F [ A ] {\displaystyle F[A]} produces a stronger axiom of replacement, which is used in the following proof. Theorem (NBG's axiom of separation ) — If a {\displaystyle a} is a set and B {\displaystyle B} is a subclass of a , {\displaystyle a,} then B {\displaystyle B} is a set. The class existence theorem constructs the restriction of the identity function to B {\displaystyle B} : I ↾ B = { ( x 1 , x 2 ) : x 1 ∈ B ∧ x 2 = x 1 } . {\displaystyle I{\upharpoonright _{B}}=\{(x_{1},x_{2}):x_{1}\in B\land x_{2}=x_{1}\}.} Since the image of a {\displaystyle a} under I ↾ B {\displaystyle I{\upharpoonright _{B}}} is B {\displaystyle B} , the axiom of replacement implies that B {\displaystyle B} is a set. 
This proof depends on the definition of image not having the requirement a ⊆ D o m ( F ) {\displaystyle a\subseteq Dom(F)} since D o m ( I ↾ B ) = B ⊆ a {\displaystyle Dom(I{\upharpoonright _{B}})=B\subseteq a} rather than a ⊆ D o m ( I ↾ B ) . {\displaystyle a\subseteq Dom(I{\upharpoonright _{B}}).} Axiom of union. If a {\displaystyle a} is a set, then there is a set containing ∪ a . {\displaystyle \cup a.} ∀ a ∃ b ∀ x [ ∃ y ( x ∈ y ∧ y ∈ a ) ⟹ x ∈ b ] . {\displaystyle \forall a\,\exists b\,\forall x\,[\,\exists y(x\in y\,\,\land \,y\in a)\implies x\in b\,].} Axiom of power set. If a {\displaystyle a} is a set, then there is a set containing P ( a ) . {\displaystyle {\mathcal {P}}(a).} Theorem — If a {\displaystyle a} is a set, then ∪ a {\displaystyle \cup a} and P ( a ) {\displaystyle {\mathcal {P}}(a)} are sets. The axiom of union states that ∪ a {\displaystyle \cup a} is a subclass of a set b {\displaystyle b} , so the axiom of separation implies ∪ a {\displaystyle \cup a} is a set. Likewise, the axiom of power set states that P ( a ) {\displaystyle {\mathcal {P}}(a)} is a subclass of a set b {\displaystyle b} , so the axiom of separation implies that P ( a ) {\displaystyle {\mathcal {P}}(a)} is a set. Axiom of infinity. There exists a nonempty set a {\displaystyle a} such that for all x {\displaystyle x} in a {\displaystyle a} , there exists a y {\displaystyle y} in a {\displaystyle a} such that x {\displaystyle x} is a proper subset of y {\displaystyle y} . ∃ a [ ∃ u ( u ∈ a ) ∧ ∀ x ( x ∈ a ⟹ ∃ y ( y ∈ a ∧ x ⊂ y ) ) ] . {\displaystyle \exists a\,[\exists u(u\in a)\,\land \,\forall x(x\in a\implies \exists y(y\in a\,\land \,x\subset y))].} The axioms of infinity and replacement prove the existence of the empty set . In the discussion of the class existence axioms , the existence of the empty class ∅ {\displaystyle \emptyset } was proved. We now prove that ∅ {\displaystyle \emptyset } is a set. 
Let function F = ∅ {\displaystyle F=\emptyset } and let a {\displaystyle a} be the set given by the axiom of infinity. By replacement, the image of a {\displaystyle a} under F {\displaystyle F} , which equals ∅ {\displaystyle \emptyset } , is a set. NBG's axiom of infinity is implied by ZFC's axiom of infinity : ∃ a [ ∅ ∈ a ∧ ∀ x ( x ∈ a ⟹ x ∪ { x } ∈ a ) ] . {\displaystyle \,\exists a\,[\emptyset \in a\,\land \,\forall x(x\in a\implies x\cup \{x\}\in a)].\,} The first conjunct of ZFC's axiom, ∅ ∈ a {\displaystyle \emptyset \in a} , implies the first conjunct of NBG's axiom. The second conjunct of ZFC's axiom, ∀ x ( x ∈ a ⟹ x ∪ { x } ∈ a ) {\displaystyle \forall x(x\in a\implies x\cup \{x\}\in a)} , implies the second conjunct of NBG's axiom since x ⊂ x ∪ { x } . {\displaystyle x\subset x\cup \{x\}.} To prove ZFC's axiom of infinity from NBG's axiom of infinity requires some of the other NBG axioms (see Weak axiom of infinity ). [ l ] The class concept allows NBG to have a stronger axiom of choice than ZFC. A choice function is a function f {\displaystyle f} defined on a set s {\displaystyle s} of nonempty sets such that f ( x ) ∈ x {\displaystyle f(x)\in x} for all x ∈ s . {\displaystyle x\in s.} ZFC's axiom of choice states that there exists a choice function for every set of nonempty sets. A global choice function is a function G {\displaystyle G} defined on the class of all nonempty sets such that G ( x ) ∈ x {\displaystyle G(x)\in x} for every nonempty set x . {\displaystyle x.} The axiom of global choice states that there exists a global choice function. This axiom implies ZFC's axiom of choice since for every set s {\displaystyle s} of nonempty sets, G | s {\displaystyle G\vert _{s}} (the restriction of G {\displaystyle G} to s {\displaystyle s} ) is a choice function for s . {\displaystyle s.} In 1964, William B. 
Easton proved that global choice is stronger than the axiom of choice by using forcing to construct a model that satisfies the axiom of choice and all the axioms of NBG except the axiom of global choice. [ 38 ] The axiom of global choice is equivalent to every class having a well-ordering, while ZFC's axiom of choice is equivalent to every set having a well-ordering. [ m ] Axiom of global choice. There exists a function that chooses an element from every nonempty set. Von Neumann published an introductory article on his axiom system in 1925. In 1928, he provided a detailed treatment of his system. [ 39 ] Von Neumann based his axiom system on two domains of primitive objects: functions and arguments. These domains overlap—objects that are in both domains are called argument-functions. Functions correspond to classes in NBG, and argument-functions correspond to sets. Von Neumann's primitive operation is function application , denoted by [ a , x ] rather than a ( x ) where a is a function and x is an argument. This operation produces an argument. Von Neumann defined classes and sets using functions and argument-functions that take only two values, A and B . He defined x ∈ a if [ a , x ] ≠ A . [ 1 ] Von Neumann's work in set theory was influenced by Georg Cantor 's articles, Ernst Zermelo's 1908 axioms for set theory , and the 1922 critiques of Zermelo 's set theory that were given independently by Abraham Fraenkel and Thoralf Skolem . Both Fraenkel and Skolem pointed out that Zermelo's axioms cannot prove the existence of the set { Z 0 , Z 1 , Z 2 , ...} where Z 0 is the set of natural numbers and Z n +1 is the power set of Z n . They then introduced the axiom of replacement, which would guarantee the existence of such sets. [ 40 ] [ n ] However, they were reluctant to adopt this axiom: Fraenkel stated "that Replacement was too strong an axiom for 'general set theory'", while "Skolem only wrote that 'we could introduce' Replacement". 
[ 42 ] Von Neumann worked on the problems of Zermelo set theory and provided solutions for some of them: In 1929, von Neumann published an article containing the axioms that would lead to NBG. This article was motivated by his concern about the consistency of the axiom of limitation of size. He stated that this axiom "does a lot, actually too much." Besides implying the axioms of separation and replacement, and the well-ordering theorem , it also implies that any class whose cardinality is less than that of V is a set. Von Neumann thought that this last implication went beyond Cantorian set theory and concluded: "We must therefore discuss whether its [the axiom's] consistency is not even more problematic than an axiomatization of set theory that does not go beyond the necessary Cantorian framework." [ 57 ] Von Neumann started his consistency investigation by introducing his 1929 axiom system, which contains all the axioms of his 1925 axiom system except the axiom of limitation of size. He replaced this axiom with two of its consequences, the axiom of replacement and a choice axiom. Von Neumann's choice axiom states: "Every relation R has a subclass that is a function with the same domain as R ." [ 58 ] Let S be von Neumann's 1929 axiom system. Von Neumann introduced the axiom system S + Regularity (which consists of S and the axiom of regularity) to demonstrate that his 1925 system is consistent relative to S . He proved: These results imply: If S is consistent, then von Neumann's 1925 axiom system is consistent. Proof: If S is consistent, then S + Regularity is consistent (result 1). Using proof by contradiction , assume that the 1925 axiom system is inconsistent, or equivalently: the 1925 axiom system implies a contradiction. Since S + Regularity implies the axioms of the 1925 system (result 2), S + Regularity also implies a contradiction. However, this contradicts the consistency of S + Regularity. 
Therefore, if S is consistent, then von Neumann's 1925 axiom system is consistent. Since S is his 1929 axiom system, von Neumann's 1925 axiom system is consistent relative to his 1929 axiom system, which is closer to Cantorian set theory. The major differences between Cantorian set theory and the 1929 axiom system are classes and von Neumann's choice axiom. The axiom system S + Regularity was modified by Bernays and Gödel to produce the equivalent NBG axiom system. In 1929, Paul Bernays started modifying von Neumann's new axiom system by taking classes and sets as primitives. He published his work in a series of articles appearing from 1937 to 1954. [ 59 ] Bernays stated that: The purpose of modifying the von Neumann system is to remain nearer to the structure of the original Zermelo system and to utilize at the same time some of the set-theoretic concepts of the Schröder logic and of Principia Mathematica which have become familiar to logicians. As will be seen, a considerable simplification results from this arrangement. [ 60 ] Bernays handled sets and classes in a two-sorted logic and introduced two membership primitives: one for membership in sets and one for membership in classes. With these primitives, he rewrote and simplified von Neumann's 1929 axioms. Bernays also included the axiom of regularity in his axiom system. [ 61 ] In 1931, Bernays sent a letter containing his set theory to Kurt Gödel . [ 36 ] Gödel simplified Bernays' theory by making every set a class, which allowed him to use just one sort and one membership primitive. He also weakened some of Bernays' axioms and replaced von Neumann's choice axiom with the equivalent axiom of global choice. [ 62 ] [ v ] Gödel used his axioms in his 1940 monograph on the relative consistency of global choice and the generalized continuum hypothesis. 
[ 63 ] Several reasons have been given for Gödel's choice of NBG for his monograph: [ w ] Gödel's achievement together with the details of his presentation led to the prominence that NBG would enjoy for the next two decades. [ 70 ] In 1963, Paul Cohen established his independence results for ZF with the help of some tools that Gödel had developed for his relative consistency proofs for NBG. [ 71 ] Later, ZFC became more popular than NBG. This was caused by several factors, including the extra work required to handle forcing in NBG, [ 72 ] Cohen's 1966 presentation of forcing, which used ZF, [ 73 ] [ y ] and the proof that NBG is a conservative extension of ZFC. [ z ] NBG is not logically equivalent to ZFC because its language is more expressive: it can make statements about classes, which cannot be made in ZFC. However, NBG and ZFC imply the same statements about sets. Therefore, NBG is a conservative extension of ZFC. NBG implies theorems that ZFC does not imply, but since NBG is a conservative extension, these theorems must involve proper classes. For example, it is a theorem of NBG that the axiom of global choice implies that the proper class V can be well-ordered and that every proper class can be put into one-to-one correspondence with V . [ aa ] One consequence of conservative extension is that ZFC and NBG are equiconsistent . Proving this uses the principle of explosion : from a contradiction , everything is provable. Assume that either ZFC or NBG is inconsistent. Then the inconsistent theory implies the contradictory statements ∅ = ∅ and ∅ ≠ ∅, which are statements about sets. By the conservative extension property, the other theory also implies these statements. Therefore, it is also inconsistent. So although NBG is more expressive, it is equiconsistent with ZFC. This result together with von Neumann's 1929 relative consistency proof implies that his 1925 axiom system with the axiom of limitation of size is equiconsistent with ZFC.
This completely resolves von Neumann's concern about the relative consistency of this powerful axiom since ZFC is within the Cantorian framework. Even though NBG is a conservative extension of ZFC, a theorem may have a shorter and more elegant proof in NBG than in ZFC (or vice versa). For a survey of known results of this nature, see Pudlák 1998 . Morse–Kelley set theory has an axiom schema of class comprehension that includes formulas whose quantifiers range over classes. MK is a stronger theory than NBG because MK proves the consistency of NBG, [ 76 ] while Gödel's second incompleteness theorem implies that NBG cannot prove the consistency of NBG. For a discussion of some ontological and other philosophical issues posed by NBG, especially when contrasted with ZFC and MK, see Appendix C of Potter 2004 . ZFC, NBG, and MK have models describable in terms of the cumulative hierarchy V α and the constructible hierarchy L α . Let V include an inaccessible cardinal κ, let X ⊆ V κ , and let Def( X ) denote the class of first-order definable subsets of X with parameters. In symbols where " ( X , ∈ ) {\displaystyle (X,\in )} " denotes the model with domain X {\displaystyle X} and relation ∈ {\displaystyle \in } , and " ⊨ {\displaystyle \models } " denotes the satisfaction relation : Def ⁡ ( X ) := { { x ∣ x ∈ X and ( X , ∈ ) ⊨ ϕ ( x , y 1 , … , y n ) } : ϕ is a first-order formula and y 1 , … , y n ∈ X } . {\displaystyle \operatorname {Def} (X):={\Bigl \{}\{x\mid x\in X{\text{ and }}(X,\in )\models \phi (x,y_{1},\ldots ,y_{n})\}:\phi {\text{ is a first-order formula and }}y_{1},\ldots ,y_{n}\in X{\Bigr \}}.} Then: The ontology of NBG provides scaffolding for speaking about "large objects" without risking paradox. For instance, in some developments of category theory , a " large category " is defined as one whose objects and morphisms make up a proper class. On the other hand, a "small category" is one whose objects and morphisms are members of a set. 
Thus, we can speak of the " category of all sets " or " category of all small categories " without risking paradox since NBG supports large categories. However, NBG does not support a "category of all categories" since large categories would be members of it and NBG does not allow proper classes to be members of anything. An ontological extension that enables us to talk formally about such a "category" is the conglomerate , which is a collection of classes. Then the "category of all categories" is defined by its objects: the conglomerate of all categories; and its morphisms: the conglomerate of all morphisms from A to B where A and B are objects. [ 83 ] On whether an ontology including classes as well as sets is adequate for category theory, see Muller 2001 .
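The set axioms discussed earlier say that the class operations of image, union, and power class send sets to sets. On finite material these operations can be computed outright; the following Python sketch only illustrates the definitions on small finite sets (the helper names image, union_class, and power_class are mine, not NBG notation):

```python
from itertools import combinations

def image(F, A):
    # F[A] = {y : there is an x in A with (x, y) in F};
    # note that A need not be a subset of Dom(F)
    return {y for (x, y) in F if x in A}

def union_class(A):
    # ∪A = {x : x belongs to some y in A}
    return {x for y in A for x in y}

def power_class(A):
    # P(A) = {x : x ⊆ A}, here returned as a list of subsets
    elems = list(A)
    return [set(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

F = {(1, 'a'), (2, 'b'), (3, 'c')}      # a function as a class of ordered pairs
print(image(F, {1, 2, 9}))              # {'a', 'b'}; 9 lying outside Dom(F) is harmless
print(union_class([{1, 2}, {2, 3}]))    # {1, 2, 3}
print(len(power_class({1, 2, 3})))      # 8
```

The point of the `image` comment mirrors the text above: the definition of F[A] does not require A ⊆ Dom(F), which is what makes NBG's replacement axiom strong enough to yield separation.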
https://en.wikipedia.org/wiki/Von_Neumann–Bernays–Gödel_set_theory
In decision theory , the von Neumann–Morgenstern ( VNM ) utility theorem demonstrates that rational choice under uncertainty involves making decisions that take the form of maximizing the expected value of some cardinal utility function. The theorem forms the foundation of expected utility theory . In 1947, John von Neumann and Oskar Morgenstern proved that any individual whose preferences satisfy four axioms has a utility function , where such an individual's preferences can be represented on an interval scale and the individual will always prefer actions that maximize expected utility. [ 1 ] That is, they proved that an agent is (VNM-)rational if and only if there exists a real-valued function u defined on possible outcomes such that every preference of the agent is characterized by maximizing the expected value of u , which can then be defined as the agent's VNM-utility (it is unique up to positive affine transformations, i.e. adding a constant and multiplying by a positive scalar). No claim is made that the agent has a "conscious desire" to maximize u , only that u exists. VNM-utility is a decision utility in that it is used to describe decisions . It is related, but not necessarily equivalent, to the utility of Bentham 's utilitarianism . [ 2 ] In the theorem, an individual agent is faced with options called lotteries . Given some mutually exclusive outcomes, a lottery is a scenario where each outcome will happen with a given probability , all probabilities summing to one. For example, for two outcomes A and B , the lottery 0.25 A + 0.75 B denotes a scenario where P ( A ) = 25% is the probability of A occurring and P ( B ) = 75% (and exactly one of them will occur). More generally, for a lottery with many possible outcomes A i , we write p 1 A 1 + p 2 A 2 + ⋯ + p n A n , with the sum of the p i {\displaystyle p_{i}} s equal to 1.
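The lottery formalism is easy to make concrete: a finite lottery is just a map from outcomes to probabilities, and a compound lottery pL + (1 − p)M expands by mixing the maps pointwise. A minimal Python sketch, with helper names of my own choosing:

```python
def lottery(**probs):
    # a lottery: outcome -> probability, with probabilities summing to one
    assert abs(sum(probs.values()) - 1.0) < 1e-9
    return probs

def mix(p, L, M):
    # the compound lottery pL + (1 - p)M, expanded to a simple lottery
    return {o: p * L.get(o, 0.0) + (1 - p) * M.get(o, 0.0)
            for o in set(L) | set(M)}

# 0.5(0.5A + 0.5B) + 0.5C expands to 0.25A + 0.25B + 0.50C
inner = mix(0.5, lottery(A=1.0), lottery(B=1.0))
outer = mix(0.5, inner, lottery(C=1.0))
print(outer)  # {'A': 0.25, 'B': 0.25, 'C': 0.5} (key order may vary)
```

The expansion computed by `mix` is exactly the "equivalent lottery" identification the theorem relies on: the formalism sees only the final outcome probabilities, not the nesting.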
The outcomes in a lottery can themselves be lotteries between other outcomes, and the expanded expression is considered an equivalent lottery: 0.5(0.5 A + 0.5 B ) + 0.5 C = 0.25 A + 0.25 B + 0.50 C . If lottery M is preferred over lottery L , we write M ≻ L {\displaystyle M\succ L} , or equivalently, L ≺ M {\displaystyle L\prec M} . If the agent is indifferent between L and M , we write the indifference relation [ 3 ] L ∼ M . {\displaystyle L\sim M.} If M is either preferred over or viewed with indifference relative to L , we write L ⪯ M . {\displaystyle L\preceq M.} The four axioms of VNM-rationality are completeness , transitivity , continuity , and independence . These axioms, apart from continuity, are often justified using the Dutch book theorems (whereas continuity is used to set aside lexicographic or infinitesimal utilities). Completeness assumes that an individual has well-defined preferences: for any lotteries L and M , either L ⪯ M or M ⪯ L (the individual must express some preference or indifference [ 4 ] ). Note that this implies reflexivity . Transitivity assumes that preferences are consistent across any three options: if L ⪯ M and M ⪯ N , then L ⪯ N . Axiom 1 and Axiom 2 together say that the individual's preferences form a total preorder . Continuity assumes that there is a "tipping point" between being better than and worse than a given middle option: if L ⪯ M ⪯ N , then there exists a probability p ∈ [ 0 , 1 ] such that p L + ( 1 − p ) N ∼ M , where the notation on the left side refers to a situation in which L is received with probability p and N is received with probability (1– p ). Instead of continuity, an alternative axiom can be assumed that does not involve a precise equality, called the Archimedean property . [ 3 ] It says that any separation in preference can be maintained under a sufficiently small deviation in probabilities: if L ≺ M ≺ N , then there exists an ε ∈ ( 0 , 1 ) such that ( 1 − ε ) L + ε N ≺ M ≺ ε L + ( 1 − ε ) N . Only one of (3) or (3′) needs to be assumed, and the other will be implied by the theorem. Independence assumes that a preference holds independently of the probability of another outcome: for any lotteries L , M , N and any p ∈ ( 0 , 1 ] , L ⪯ N if and only if p L + ( 1 − p ) M ⪯ p N + ( 1 − p ) M .
In other words, the probabilities involving M {\displaystyle M} cancel out and don't affect our decision, because the probability of M {\displaystyle M} is the same in both lotteries. Note that the "only if" direction is necessary for the theorem to work. Without that, we have this counterexample: there are only two outcomes A , B {\displaystyle A,B} , and the agent is indifferent among { p A + ( 1 − p ) B : p ∈ [ 0 , 1 ) } {\displaystyle \{pA+(1-p)B:p\in [0,1)\}} , and strictly prefers all of them over A {\displaystyle A} . With the "only if" direction, we can argue that 1 2 A + 1 2 B ⪰ 1 2 B + 1 2 B {\displaystyle {\frac {1}{2}}A+{\frac {1}{2}}B\succeq {\frac {1}{2}}B+{\frac {1}{2}}B} implies A ⪰ B {\displaystyle A\succeq B} , thus excluding this counterexample. The independence axiom implies the axiom on reduction of compound lotteries: [ 5 ] To see how Axiom 4 implies Axiom 4', set M = q L ′ + ( 1 − q ) N ′ {\displaystyle M=qL'+(1-q)N'} in the expression in Axiom 4, and expand. For any VNM-rational agent (i.e. satisfying axioms 1–4), there exists a function u which assigns to each outcome A a real number u(A) such that for any two lotteries L and M , L ≺ M if and only if E ( u ( L ) ) < E ( u ( M ) ) , where E(u(L)) , or more briefly Eu ( L ) , is given by Eu ( L ) = ∑ i p i u ( A i ) for L = ∑ i p i A i . As such, u can be uniquely determined (up to adding a constant and multiplying by a positive scalar) by preferences between simple lotteries , meaning those of the form pA + (1 − p ) B having only two outcomes. Conversely, any agent acting to maximize the expectation of a function u will obey axioms 1–4. Such a function is called the agent's von Neumann–Morgenstern (VNM) utility . The proof is constructive: it shows how the desired function u {\displaystyle u} can be built. Here we outline the construction process for the case in which the number of sure outcomes is finite. [ 6 ] : 132–134 Suppose there are n sure outcomes, A 1 … A n {\displaystyle A_{1}\dots A_{n}} .
Note that every sure outcome can be seen as a lottery: it is a degenerate lottery in which the outcome is selected with probability 1. Hence, by the Completeness and Transitivity axioms, it is possible to order the outcomes from worst to best: A 1 ⪯ A 2 ⪯ ⋯ ⪯ A n . We assume that at least one of the inequalities is strict (otherwise the utility function is trivial, a constant). So A 1 ≺ A n {\displaystyle A_{1}\prec A_{n}} . We use these two extreme outcomes, the worst and the best, as the scaling unit of our utility function, and define: u ( A 1 ) = 0 and u ( A n ) = 1 . For every probability p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} , define a lottery that selects the best outcome with probability p {\displaystyle p} and the worst outcome otherwise: L ( p ) = p ⋅ A n + ( 1 − p ) ⋅ A 1 . Note that L ( 0 ) ∼ A 1 {\displaystyle L(0)\sim A_{1}} and L ( 1 ) ∼ A n {\displaystyle L(1)\sim A_{n}} . By the Continuity axiom, for every sure outcome A i {\displaystyle A_{i}} , there is a probability q i {\displaystyle q_{i}} such that the agent is indifferent between A i and L ( q i ) : A i ∼ L ( q i ) . For every i {\displaystyle i} , the utility function for outcome A i {\displaystyle A_{i}} is defined as u ( A i ) = q i , so the utility of every lottery M = ∑ i p i A i {\displaystyle M=\sum _{i}p_{i}A_{i}} is the expectation of u : u ( M ) = ∑ i p i u ( A i ) = ∑ i p i q i . To see why this utility function makes sense, consider a lottery M = ∑ i p i A i {\displaystyle M=\sum _{i}p_{i}A_{i}} , which selects outcome A i {\displaystyle A_{i}} with probability p i {\displaystyle p_{i}} . But, by our assumption, the decision maker is indifferent between the sure outcome A i {\displaystyle A_{i}} and the lottery q i ⋅ A n + ( 1 − q i ) ⋅ A 1 {\displaystyle q_{i}\cdot A_{n}+(1-q_{i})\cdot A_{1}} . So, by the Reduction axiom, he is indifferent between the lottery M {\displaystyle M} and the following lottery: M ′ = ( ∑ i p i q i ) ⋅ A n + ( 1 − ∑ i p i q i ) ⋅ A 1 = u ( M ) ⋅ A n + ( 1 − u ( M ) ) ⋅ A 1 . The lottery M ′ {\displaystyle M'} is, in effect, a lottery in which the best outcome is won with probability u ( M ) {\displaystyle u(M)} , and the worst outcome otherwise.
Hence, if u ( M ) > u ( L ) {\displaystyle u(M)>u(L)} , a rational decision maker would prefer the lottery M {\displaystyle M} over the lottery L {\displaystyle L} , because it gives him a larger chance to win the best outcome; hence M ≻ L {\displaystyle M\succ L} . Von Neumann and Morgenstern anticipated surprise at the strength of their conclusion. But according to them, the reason their utility function works is that it is constructed precisely to fill the role of something whose expectation is maximized: "Many economists will feel that we are assuming far too much ... Have we not shown too much? ... As far as we can see, our postulates [are] plausible ... We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate." – VNM 1953, § 3.1.1 p. 16 and § 3.7.1 p. 28 [ 1 ] Thus, the content of the theorem is that the construction of u is possible, and they claim little about its nature. It is often the case that a person, faced with real-world gambles with money, does not act to maximize the expected value of their dollar assets. For example, a person who only possesses $1000 in savings may be reluctant to risk it all on a 20% chance to win $10,000, even though the expected dollar value of that gamble, 20% × $10,000 = $2,000, exceeds the certain $1000. However, if the person is VNM-rational, such facts are automatically accounted for in their utility function u . In this example, we could conclude that u ( $1000 ) > 0.8 u ( $0 ) + 0.2 u ( $10,000 ) , where the dollar amounts here really represent outcomes (cf. " value "), the three possible situations the individual could face. In particular, u can exhibit properties like u ($1)+ u ($1) ≠ u ($2) without contradicting VNM-rationality at all. This leads to a quantitative theory of monetary risk aversion. In 1738, Daniel Bernoulli published a treatise [ 7 ] in which he posits that rational behavior can be described as maximizing the expectation of a function u , which in particular need not be monetary-valued, thus accounting for risk aversion. This is the expected utility hypothesis .
As stated, the hypothesis may appear to be a bold claim. The aim of the expected utility theorem is to provide "modest conditions" (i.e. axioms) describing when the expected utility hypothesis holds, which can be evaluated directly and intuitively: "The axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness may be judged directly. In a situation like ours this last requirement is particularly vital, in spite of its vagueness: we want to make an intuitive concept amenable to mathematical treatment and to see as clearly as possible what hypotheses this requires." – VNM 1953 § 3.5.2, p. 25 [ 1 ] As such, claims that the expected utility hypothesis does not characterize rationality must reject one of the VNM axioms. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom. Because the theorem assumes nothing about the nature of the possible outcomes of the gambles, they could be morally significant events, for instance involving the life, death, sickness, or health of others. A von Neumann–Morgenstern rational agent is capable of acting with great concern for such events, sacrificing much personal wealth or well-being, and all of these actions will factor into the construction/definition of the agent's VNM-utility function. In other words, both what is naturally perceived as "personal gain", and what is naturally perceived as "altruism", are implicitly balanced in the VNM-utility function of a VNM-rational individual. Therefore, the full range of agent-focused to agent-neutral behaviors are possible with various VNM-utility functions [ clarification needed ] . If the utility of N {\displaystyle N} is p M {\displaystyle pM} , a von Neumann–Morgenstern rational agent must be indifferent between 1 N {\displaystyle 1N} and p M + ( 1 − p ) 0 {\displaystyle pM+(1-p)0} . 
An agent-focused von Neumann–Morgenstern rational agent therefore cannot favor more equal, or "fair", distributions of utility between its own possible future selves. Some utilitarian moral theories are concerned with quantities called the "total utility" and "average utility" of collectives, and characterize morality in terms of favoring the utility or happiness of others with disregard for one's own. These notions can be related to, but are distinct from, VNM-utility: The term E-utility for "experience utility" has been coined [ 2 ] to refer to the types of "hedonistic" utility like that of Bentham 's greatest happiness principle . Since morality affects decisions, a VNM-rational agent's morals will affect the definition of its own utility function (see above). Thus, the morality of a VNM-rational agent can be characterized by correlation of the agent's VNM-utility with the VNM-utility, E-utility, or "happiness" of others, among other means, but not by disregard for the agent's own VNM-utility, a contradiction in terms. Since if L and M are lotteries, then pL + (1 − p ) M is simply "expanded out" and considered a lottery itself, the VNM formalism ignores what may be experienced as "nested gambling". This is related to the Ellsberg problem where people choose to avoid the perception of risks about risks . Von Neumann and Morgenstern recognized this limitation: "...concepts like a specific utility of gambling cannot be formulated free of contradiction on this level. This may seem to be a paradoxical assertion. But anybody who has seriously tried to axiomatize that elusive concept, will probably concur with it." – VNM 1953 § 3.7.1, p. 28 . [ 1 ] Since for any two VNM-agents X and Y , their VNM-utility functions u X and u Y are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. 
Hence expressions like u X ( L ) + u Y ( L ) and u X ( L ) − u Y ( L ) are not canonically defined, nor are comparisons like u X ( L ) < u Y ( L ) canonically true or false. In particular, the aforementioned "total VNM-utility" and "average VNM-utility" of a population are not canonically meaningful without normalization assumptions. The expected utility hypothesis has been shown to have imperfect predictive accuracy in laboratory experiments, such as the Allais paradox .
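The uniqueness claim, up to adding a constant and multiplying by a positive scalar, is easy to check numerically: since E[a·u + b] = a·E[u] + b with a > 0, every pairwise comparison of lotteries survives the transformation. A small illustrative Python check (the function and variable names are mine):

```python
from itertools import combinations

def expected_utility(u, L):
    # E[u] of a lottery L given as a dict outcome -> probability
    return sum(p * u[o] for o, p in L.items())

u = {'A': 0.0, 'B': 0.4, 'C': 1.0}            # some VNM utility on three outcomes
v = {o: 3.0 * x + 7.0 for o, x in u.items()}  # a positive affine transform of u

lotteries = [{'A': 0.5, 'C': 0.5}, {'B': 1.0}, {'A': 0.1, 'B': 0.2, 'C': 0.7}]
for L, M in combinations(lotteries, 2):
    # the ranking of any two lotteries agrees under u and under v
    assert ((expected_utility(u, L) > expected_utility(u, M))
            == (expected_utility(v, L) > expected_utility(v, M)))
```

By contrast, there is no such invariant way to compare u of one agent with u of another, which is exactly the interpersonal non-comparability discussed above.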
https://en.wikipedia.org/wiki/Von_Neumann–Morgenstern_utility_theorem
The von Richter reaction , also named the von Richter rearrangement , is a name reaction in organic chemistry. It is named after Victor von Richter , who discovered the reaction in 1871. It is the reaction of aromatic nitro compounds with potassium cyanide in aqueous ethanol to give the product of cine substitution (ring substitution resulting in the entering group positioned adjacent to the previous location of the leaving group) by a carboxyl group. [ 1 ] [ 2 ] [ 3 ] Although it is not generally synthetically useful due to the low chemical yield and formation of numerous side products, its mechanism was of considerable interest, eluding chemists for almost 100 years before the currently accepted one was proposed. The reaction below shows the classic example of the conversion of p- bromonitrobenzene into m- bromobenzoic acid. [ 4 ] The reaction is a type of nucleophilic aromatic substitution . [ 4 ] Besides the bromo derivative, chlorine- and iodine-substituted nitroarenes, as well as more highly substituted derivatives, can also be used as substrates of this reaction. However, yields are generally poor to moderate, with reported yields ranging from 1% to 50%. [ 5 ] [ 6 ] Several reasonable mechanisms were proposed and refuted by mechanistic data before the currently accepted one, shown below, was proposed in 1960 by Rosenblum on the basis of 15 N labeling experiments. [ 7 ] [ 8 ] First, the cyanide attacks the carbon ortho to the nitro group. This is followed by ring closure via nucleophilic attack on the cyano group, after which the imidate intermediate is rearomatized. Ring opening via nitrogen–oxygen bond cleavage yields an ortho-nitroso benzamide, which recyclizes to form a compound containing a nitrogen–nitrogen bond. Elimination of water produces a cyclic azoketone, which undergoes nucleophilic attack by hydroxide to form a tetrahedral intermediate.
This intermediate collapses with the elimination of the azo group to yield an aryldiazene with an ortho carboxylate group, which extrudes nitrogen gas to afford the anionic form of the observed benzoic acid product, presumably through the generation and immediate protonation of an aryl anion intermediate. The product is isolated upon acidic workup. Subsequent mechanistic studies have shown that the subjection of independently prepared ortho- nitroso benzamide and azoketone intermediates to von Richter reaction conditions afforded the expected product, lending further support to this proposal. [ 9 ]
https://en.wikipedia.org/wiki/Von_Richter_reaction
Von Stahel und Eysen (English: On Steel and Iron ) is the first known printed book on metallurgy, published in 1532 by several publishers: Kunegunde Hergot in Nuremberg , Melchior Sachs in Erfurt , and Peter Jordan in Mainz . It has been suggested that Hergot was probably the first to publish the text, as the material seems to come from Nuremberg: its material on tempering and quenching is similar to the short treatise on hardening iron beginning 'Von dem herten. Nu spricht meister Alkaym' in the late fourteenth- or early fifteenth-century Nuremberg manuscript Nürnberger Handschrift GNM 3227a . [ 1 ] About half the text is on how to harden iron and steel through tempering and quenching , mentioning water, but also a range of recipes of varying degrees of elaborateness. [ 2 ] The recipe 'take clarified honey, fresh urine of a he-goat, alum, borax, olive oil, and salt; mix everything well together and quench therein' might, through the urea content of the urine (H 2 NCONH 2 ), have helped to produce nitrated, 'case-hardened' iron. Less likely to have been efficacious is: 'take varnish, dragon's blood, horn scrapings, half as much salt, juice made from earthworms, radish juice, tallow, and vervain and quench therein. It is also very advantageous in hardening if a piece that is to be hardened is first thoroughly cleaned and well polished'. [ 3 ] A modern commentator on some of the more outlandish techniques in the book noted: "There isn't really much to say...except that perhaps it was meant to trip up rivals. However, this may not be the case because similar instructions were circulated in 1708 in Nuremberg ." [ 4 ] The text also includes techniques for colouring, soldering, and etching. Etching was quite a new technology at the time, and Von Stahel und Eysen provides the first attested recipes. [ 5 ] This article about a book on metallurgy or metalworking is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Von_Stahel_und_Eysen
In number theory , the von Staudt–Clausen theorem is a result determining the fractional part of Bernoulli numbers , found independently by Karl von Staudt ( 1840 ) and Thomas Clausen ( 1840 ). Specifically, if n is a positive integer and we add 1/ p to the Bernoulli number B 2 n for every prime p such that p − 1 divides 2 n , then we obtain an integer; that is,

{\displaystyle B_{2n}+\sum _{(p-1)|2n}{\frac {1}{p}}\in \mathbb {Z} .}

This fact immediately allows us to characterize the denominators of the non-zero Bernoulli numbers B 2 n as the product of all primes p such that p − 1 divides 2 n ; consequently, the denominators are square-free and divisible by 6.

A proof of the von Staudt–Clausen theorem follows from an explicit formula for Bernoulli numbers,

{\displaystyle B_{2n}=\sum _{j=0}^{2n}{\frac {1}{j+1}}\sum _{m=0}^{j}(-1)^{m}{\binom {j}{m}}m^{2n},}

and, as a corollary,

{\displaystyle B_{2n}=\sum _{j=0}^{2n}{\frac {(-1)^{j}j!}{j+1}}S(2n,j),}

where S ( n , j ) are the Stirling numbers of the second kind . Furthermore, the following lemmas are needed. Let p be a prime number; then:

1. If p − 1 divides 2 n , then

{\displaystyle \sum _{m=1}^{p-1}(-1)^{m}{\binom {p-1}{m}}m^{2n}\equiv -1{\pmod {p}}.}

2. If p − 1 does not divide 2 n , then

{\displaystyle \sum _{m=1}^{p-1}(-1)^{m}{\binom {p-1}{m}}m^{2n}\equiv 0{\pmod {p}}.}

Proof of (1) and (2) : From Fermat's little theorem one has m^(p−1) ≡ 1 (mod p ) for m = 1, 2, ..., p − 1. If p − 1 divides 2 n , then m^(2n) ≡ 1 (mod p ) for m = 1, 2, ..., p − 1, and therefore

{\displaystyle \sum _{m=1}^{p-1}(-1)^{m}{\binom {p-1}{m}}m^{2n}\equiv \sum _{m=1}^{p-1}(-1)^{m}{\binom {p-1}{m}}=(1-1)^{p-1}-1=-1{\pmod {p}},}

from which (1) follows immediately. If p − 1 does not divide 2 n , let ℘ = ⌊ 2 n / ( p − 1) ⌋ ; then, iterating Fermat's theorem, m^(2n) ≡ m^(2n − ℘(p−1)) (mod p ) for m = 1, 2, ..., p − 1, with 0 < 2 n − ℘( p − 1) < p − 1. Therefore

{\displaystyle \sum _{m=1}^{p-1}(-1)^{m}{\binom {p-1}{m}}m^{2n}\equiv (-1)^{p-1}(p-1)!\,S(2n-\wp (p-1),p-1){\pmod {p}}.}

Lemma (2) now follows from the above and the fact that S ( n , j ) = 0 for j > n .

(3) It is easy to deduce that for a > 2 and b > 2, ab divides ( ab − 1)! .

(4) Stirling numbers of the second kind are integers .

Now we are ready to prove the theorem. Consider the terms (−1)^ j j ! S (2 n , j )/( j + 1) of the corollary formula. If j + 1 is composite and j > 3, then from (3) , j + 1 divides j ! , so by (4) the term is an integer. For j = 3, the term is −3! S (2 n , 3)/4, and 3! S (2 n , 3) = 3^(2 n ) − 3 · 2^(2 n ) + 3 ≡ 0 (mod 4), so this term is an integer as well. If j + 1 is prime, then we use (1) and (2) , and if j + 1 is composite, then we use (3) and (4) , to deduce

{\displaystyle B_{2n}=\sum _{j=0}^{2n}{\frac {(-1)^{j}j!}{j+1}}S(2n,j)=I_{n}-\sum _{(p-1)|2n}{\frac {1}{p}},}

where I n is an integer, as desired. [ 1 ] [ 2 ]
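The theorem is straightforward to verify numerically. The sketch below computes Bernoulli numbers exactly with the standard recurrence B_m = −(1/(m+1)) Σ_{k<m} C(m+1, k) B_k (function names here are ad hoc) and checks both the integrality claim and the characterization of the denominators:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    # Exact Bernoulli numbers via B_m = -(1/(m+1)) * sum_{k<m} C(m+1,k) B_k
    # (convention B_1 = -1/2; odd B_m vanish for m >= 3)
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

B = bernoulli_numbers(30)
for two_n in range(2, 31, 2):
    # primes p with (p-1) | 2n necessarily satisfy p <= 2n + 1
    primes = [p for p in range(2, two_n + 2)
              if is_prime(p) and two_n % (p - 1) == 0]
    value = B[two_n] + sum(Fraction(1, p) for p in primes)
    assert value.denominator == 1          # von Staudt-Clausen: an integer
    prod = 1
    for p in primes:
        prod *= p
    assert B[two_n].denominator == prod    # denominator = product of those primes
```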
https://en.wikipedia.org/wiki/Von_Staudt–Clausen_theorem
In mathematics, the Voorhoeve index is a non-negative real number associated with certain functions on the complex numbers , named after Marc Voorhoeve . It may be used to extend Rolle's theorem from real functions to complex functions, taking the role that for real functions is played by the number of zeros of the function in an interval . The Voorhoeve index V I ( f ) {\displaystyle V_{I}(f)} of a complex-valued function f that is analytic in a complex neighbourhood of the real interval I {\displaystyle I} = [ a , b ] is given by

{\displaystyle V_{I}(f)={\frac {1}{2\pi }}\int _{a}^{b}\left|\operatorname {Im} {\frac {f'(t)}{f(t)}}\right|\,dt,}

the normalized total variation of the argument of f along I . (Different authors use different normalization factors.) Rolle's theorem states that if f {\displaystyle f} is a continuously differentiable real-valued function on the real line , and f ( a ) = {\displaystyle f(a)=} f ( b ) = 0 {\displaystyle f(b)=0} , where a < b {\displaystyle a<b} , then its derivative f ′ {\displaystyle f'} has a zero strictly between a {\displaystyle a} and b {\displaystyle b} . Or, more generally, if N I ( f ) {\displaystyle N_{I}(f)} denotes the number of zeros of the continuously differentiable function f {\displaystyle f} on the interval I {\displaystyle I} , then N I ( f ) ≤ N I ( f ′ ) + 1. {\displaystyle N_{I}(f)\leq N_{I}(f')+1.} Now one has the analogue of Rolle's theorem: if f is analytic in a complex neighbourhood of I , then

{\displaystyle V_{I}(f)\leq V_{I}(f')+{\frac {1}{2}}.}

This leads to bounds on the number of zeros of an analytic function in a complex region.
https://en.wikipedia.org/wiki/Voorhoeve_index
In mathematics , Vopěnka's principle is a large cardinal axiom. The intuition behind the axiom is that the set-theoretical universe is so large that in every proper class , some members are similar to others, with this similarity formalized through elementary embeddings . Vopěnka's principle was first introduced by Petr Vopěnka and independently considered by H. Jerome Keisler , and was written up by Solovay, Reinhardt & Kanamori (1978) . According to Pudlák (2013 , p. 204), Vopěnka's principle was originally intended as a joke: Vopěnka was apparently unenthusiastic about large cardinals and introduced his principle as a bogus large cardinal property, planning to show later that it was not consistent. However, before publishing his inconsistency proof he found a flaw in it. Vopěnka's principle asserts that for every proper class of binary relations (each with set-sized domain), there is one elementarily embeddable into another. This cannot be stated as a single sentence of ZFC as it involves a quantification over classes. A cardinal κ is called a Vopěnka cardinal if it is inaccessible and Vopěnka's principle holds in the rank V κ (allowing arbitrary S ⊂ V κ as "classes"). [ 1 ] Many equivalent formulations are possible. For example, Vopěnka's principle is equivalent to each of the following statements. Even when restricted to predicates and proper classes definable in first order set theory, the principle implies existence of Σ n correct extendible cardinals for every n . If κ is an almost huge cardinal , then a strong form of Vopěnka's principle holds in V κ : This set theory -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Vopěnka's_principle
Vorlesungen über Zahlentheorie ( German pronunciation: [ˈfoːɐ̯ˌleːzʊŋən ˈyːbɐ ˈtsaːlənteoˌʁiː] ; German for Lectures on Number Theory ) is the name of several different textbooks of number theory . The best known was written by Peter Gustav Lejeune Dirichlet and Richard Dedekind , and published in 1863. Others were written by Leopold Kronecker , Edmund Landau , and Helmut Hasse . They all cover elementary number theory, Dirichlet's theorem, quadratic fields and forms, and sometimes more advanced topics. Based on Dirichlet's number theory course at the University of Göttingen , the Vorlesungen were edited by Dedekind and published after Lejeune Dirichlet's death. Dedekind added several appendices to the Vorlesungen , in which he collected further results of Lejeune Dirichlet's and also developed his own original mathematical ideas. The Vorlesungen cover topics in elementary number theory, algebraic number theory and analytic number theory , including modular arithmetic , quadratic congruences, quadratic reciprocity and binary quadratic forms . John Stillwell 's 1999 translation of the Vorlesungen does not include Dedekind's Supplements X and XI, in which he begins to develop the theory of ideals . Chapters 1 to 4 cover similar ground to Gauss's Disquisitiones Arithmeticae , and Dedekind added footnotes which specifically cross-reference the relevant sections of the Disquisitiones . These chapters can be thought of as a summary of existing knowledge, although Dirichlet simplifies Gauss's presentation and introduces his own proofs in some places. Chapter 5 contains Dirichlet's derivation of the class number formula for real and imaginary quadratic fields . Although other mathematicians had conjectured similar formulae, Dirichlet gave the first rigorous proof.
Supplement VI contains Dirichlet's proof that an arithmetic progression of the form a + nd , where a and d are coprime, contains an infinite number of primes. The Vorlesungen can be seen as a watershed between the classical number theory of Fermat , Jacobi and Gauss , and the modern number theory of Dedekind, Riemann and Hilbert . Dirichlet does not explicitly recognise the concept of the group that is central to modern algebra , but many of his proofs show an implicit understanding of group theory. The Vorlesungen contains two key results in number theory which were first proved by Dirichlet. The first of these is the class number formula for binary quadratic forms. The second is a proof that such arithmetic progressions contain an infinite number of primes (known as Dirichlet's theorem ); this proof introduces the Dirichlet L-series . These results are important milestones in the development of analytic number theory. Leopold Kronecker 's book was first published in 1901 in two parts and reprinted by Springer in 1978. It covers elementary and algebraic number theory, including Dirichlet's theorem. Edmund Landau 's book Vorlesungen über Zahlentheorie was first published as a three-volume set in 1927. The first half of volume 1 was published as Vorlesungen über Zahlentheorie. Aus der elementaren Zahlentheorie in 1950, with an English translation in 1958 under the title Elementary Number Theory . In 1969 Chelsea republished the second half of volume 1 together with volumes 2 and 3 as a single volume. Volume 1, on elementary and additive number theory, includes topics such as Dirichlet's theorem, Brun's sieve, binary quadratic forms, Goldbach's conjecture, Waring's problem, and the Hardy–Littlewood work on the singular series. Volume 2 covers topics in analytic number theory, such as estimates for the error in the prime number theorem, and topics in geometric number theory such as estimating numbers of lattice points.
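Dirichlet's theorem, which recurs throughout these texts, can be illustrated (though of course not proved) by enumerating primes in a single progression; the progression 3 + 4n below is an arbitrary example, and the helper names are ad hoc:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def primes_in_progression(a, d, count):
    # First `count` primes of the form a + n*d; Dirichlet's theorem
    # guarantees infinitely many whenever gcd(a, d) = 1.
    found, n = [], 0
    while len(found) < count:
        if is_prime(a + n * d):
            found.append(a + n * d)
        n += 1
    return found

print(primes_in_progression(3, 4, 10))   # primes congruent to 3 mod 4
```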
Volume 3 covers algebraic number theory, including ideal theory, quadratic number fields, and applications to Fermat's last theorem. Many of the results described by Landau were state of the art at the time but have since been superseded by stronger results. Helmut Hasse 's book Vorlesungen über Zahlentheorie was published in 1950, and is different from and more elementary than his book Zahlentheorie . It covers elementary number theory, Dirichlet's theorem, and quadratic fields.
https://en.wikipedia.org/wiki/Vorlesungen_über_Zahlentheorie
Voron 2.4 (Russian: ворон , raven) is a CoreXY 3D printer released in May 2020. It has open-source software and hardware , and must be built by the user from parts sourced individually or in kits from third-party vendors. [ 1 ] The printer has been described as a resurgence of the RepRap culture . [ 2 ] An active user community maintains the specification and shares experiences, improvements and modifications. This contributes to continuous improvement , and there are several types of adaptations, extensions and further developments (for example, the StealthBurner interchangeable tool head ). Voron 2.4 has a reputation for being complex to build [ 3 ] [ 4 ] and requiring considerable effort to operate. [ citation needed ] In return, its open specification and extensive use of off-the-shelf software make it highly maintainable , modular , and extensible . The Voron project was started by the Russian developer Maks Zolin (pseudonym russiancatfood, RCF), who wanted a better, faster, and quieter 3D printer. [ 5 ] He built a printer and started the company MZ-Bot based on open source ideology. [ 6 ] In 2015, the Voron Geared Extruder was released as the first design to use the Voron name. [ 7 ] In 2015, Zolin sold the first 18 printers as kits (Voron 1.0, later renamed Voron Trident, and quite similar to the later Voron Legacy), and marked them with serial numbers. [ 8 ] In March 2016, the first Voron printer was publicly released [ 7 ] via the company MZ-Bot. [ 6 ] The V24 was an experimental model with a build volume of 24×24×24" (610×610×610 mm). Only two were built, laying the foundation for the later Voron2. [ 7 ] By February 2019, over 100 Voron2 printers had been built and serialized, and a year later in 2020, the number had increased to 350 Voron2 printers. The Voron2.0 was never officially launched. [ 7 ] Zolin found that he did not want to run a company and instead decided to release his work freely, inviting others to collaborate with him.
[ 5 ] The tradition of marking new builds with serial numbers has lived on, and users who build their own Voron printer can be assigned their own serial number as proof of the hard work they have put into sourcing parts, assembling, and configuring the printer. In May 2020, Voron2.4 was launched, and over 2500 [ 7 ] printers were registered with serial numbers before the 2.4R2 version was launched in February 2022. [ 9 ] The Voron 2.4 is available as standard in the 250, 300 and 350 versions, which have build volumes of 250×250×250 mm (~15 L), 300×300×300 mm (~27 L) and 350×350×350 mm (~42 L), respectively. It features a closed build chamber, [ 10 ] which provides stable temperatures that are favorable for certain types of 3D printing filament , reduces noise, and allows for controlled exhaust emissions ( HEPA filter extensions are available [ 11 ] ). The CoreXY design results in less moving mass, allowing for higher accelerations and speeds. The belt is based on the CoreXY pattern, but with the belts stacked on top of each other and without the crossover found in some other CoreXY designs, [ 7 ] [ 12 ] which allows for favorable motor placement. [ 7 ] The build manual emphasizes that the two belts should be of the same make and have exactly the same length to achieve consistent tension. [ 7 ] The frame is constructed from lightweight and rigid 2020 aluminum profiles with 6 mm slots, which must meet certain requirements. [ 6 ] Linear-motion guide rails of type MGN7, MGN9 or MGN12 are used along the three axes (alternatively guide rods can be used). [ 8 ] The recommended belts are Gates Unitta 6 mm and/or 9 mm. [ 8 ] A single stack of F695 flange bearings is often used for belt idlers, as the bearings are much larger than standard GT2 belt idlers. [ 8 ] Voron 2.4 has a flying gantry, which differs from most other "pioneer" CoreXY printers (like Rat Rig V-Core, VzBot 330 and Voron Trident). 
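In the CoreXY arrangement described above, neither belt corresponds to a single Cartesian axis: both motors contribute to every X and Y move. A minimal sketch of the standard CoreXY coordinate transform (sign conventions vary between machines, and the values below are made up):

```python
def corexy_motor_moves(dx, dy):
    # Standard CoreXY transform: each motor move is a sum/difference
    # of the Cartesian displacements.
    da = dx + dy
    db = dx - dy
    return da, db

def corexy_cartesian(da, db):
    # Inverse transform back to Cartesian displacements
    return (da + db) / 2, (da - db) / 2

# A pure X move drives both motors in the same direction...
assert corexy_motor_moves(10, 0) == (10, 10)
# ...a pure Y move drives them in opposite directions...
assert corexy_motor_moves(0, 10) == (10, -10)
# ...and the transform round-trips.
assert corexy_cartesian(*corexy_motor_moves(3, 4)) == (3, 4)
```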
[ 13 ] In other words, the 2.4 model has a stationary print plate and uses separate belts to move the gantry (and thus the print head) along the z-axis, while most other CoreXY printers have a fixed gantry and a print plate that moves vertically on lead screws . A stationary print plate makes it possible to use a heavier plate (for example of thick steel instead of thin aluminium) that warps less when heated. It also allows a more space-efficient frame, and makes it easier to calibrate the print head to be parallel with the build plate (less need for bed mesh compensation). A disadvantage is that the z-axis may sag when the printer is not in use, but the gantry is leveled again when the printer is started and homed. All movement control is done with Klipper software on a Raspberry Pi , which provides great flexibility and extensibility through various parameters that can be programmed in a configuration file. [ 6 ] The printer has the option of automatic calibration to compensate for unevenness in the build plate. The Voron 2.4 can be used for both hobby and professional small-scale production and prototyping. With high-quality components and careful assembly, the printer can achieve high speed, precision and reliability. Construction of the printer is time-consuming. Points to pay attention to during construction include ensuring the frame is square , using threadlock and proper torque on screws, using precise 3D printed parts, and connecting all the electrical components correctly. [ 14 ]
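Klipper's behaviour is driven by a plain-text configuration file (printer.cfg). The fragment below is a hypothetical illustration in the spirit of a Voron 2.4 setup: the section names and option keys follow Klipper's documented format, but the pin names, travel limits, and probe coordinates are placeholders rather than values from any official configuration.

```ini
# Hypothetical printer.cfg excerpt (illustrative values only)
[printer]
kinematics: corexy
max_velocity: 300
max_accel: 3000

[stepper_x]
step_pin: PF13          ; placeholder MCU pin
dir_pin: PF12
enable_pin: !PF14
rotation_distance: 40
microsteps: 32
endstop_pin: PG6
position_max: 350
homing_speed: 50

[quad_gantry_level]
; leveling of the flying gantry against the fixed bed
gantry_corners:
    -60, -10
    410, 420
points:
    50, 25
    50, 275
    300, 275
    300, 25
```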
https://en.wikipedia.org/wiki/Voron_2.4
Voronoi deformation density (VDD) is a method employed in computational chemistry to compute the atomic charge distribution of a molecule in order to provide information about its chemical properties. The method is based on the partitioning of space into non-overlapping atomic areas modelled as Voronoi cells and then computing the deformation density within those cells (i.e. the extent to which electron density differs from that of an unbonded atom). [ 1 ] The VDD charge Q A of atom A is computed as the (numerical) integral of the deformation density ∆ ρ ( r ) = ρ ( r ) – Σ B ρ B ( r ) associated with the formation of the molecule from its atoms over the volume of the Voronoi cell of atom A: The Voronoi cell of atom A is defined as the compartment of space bounded by the bond midplanes on and perpendicular to all bond axes between nucleus A and its neighboring nuclei (cf. the Wigner–Seitz cells in crystals). The Voronoi cell of atom A is therefore the region of space closer to nucleus A than to any other nucleus. Furthermore, ρ ( r ) is the electron density of the molecule and Σ B ρ B ( r ) the superposition of atomic densities ρ B of a fictitious promolecule without chemical interactions that is associated with the situation in which all atoms are neutral. Note that an atomic charge is not a physical observable . Nevertheless, it has been proven a useful means to compactly describe and analyze the electron density distribution in a molecule , which is important for understanding the behavior of the latter. In this connection, it is an asset of VDD atomic charges Q A that they have a rather straightforward and transparent interpretation. Instead of measuring the amount of charge associated with a particular atom A, Q A directly monitors how much charge flows, due to chemical interactions, out of ( Q A > 0) or into ( Q A < 0) the Voronoi cell of atom A, that is, the region of space that is closer to nucleus A than to any other nucleus. 
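The VDD partitioning is easy to sketch numerically. In the toy one-dimensional example below, grid points are assigned to the nearest nucleus (its Voronoi cell) and the deformation density is integrated cell by cell; the Gaussian "densities" are invented stand-ins for real atomic and molecular densities, not electronic-structure output:

```python
import math

nuclei = [0.0, 1.4]      # positions of two nuclei, A and B (toy values)

def rho_atom(x, center, q=1.0):
    # Normalized Gaussian standing in for an atomic density carrying q electrons
    return q * math.sqrt(1 / math.pi) * math.exp(-(x - center) ** 2)

def rho_molecule(x):
    # Invented "molecular" density: same total charge, shifted from A toward B
    return rho_atom(x, nuclei[0], q=0.8) + rho_atom(x, nuclei[1], q=1.2)

# Q_A = -integral of the deformation density over A's Voronoi cell
dx = 0.001
Q = [0.0, 0.0]
for k in range(-8000, 9400):
    x = k * dx
    cell = min((0, 1), key=lambda i: abs(x - nuclei[i]))
    deformation = rho_molecule(x) - sum(rho_atom(x, c) for c in nuclei)
    Q[cell] -= deformation * dx

# Charge has flowed out of A's cell (Q_A > 0) and into B's (Q_B < 0), summing to ~0
assert Q[0] > 0 > Q[1] and abs(Q[0] + Q[1]) < 1e-4
```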
This computational chemistry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Voronoi_deformation_density
N. N. Vorozhtsov Novosibirsk Institute of Organic Chemistry of the Siberian Branch of the Russian Academy of Sciences ( Russian : Новосибирский институт органической химии имени Н. Н. Ворожцова СО РАН ) is a research institute in Akademgorodok of Novosibirsk , Russia. It was founded in 1958. [ 1 ] Its research areas include methods for the synthesis of aromatic, organofluorine, heterocyclic and heteroatomic compounds; the properties and formation of organic, hybrid and polymer materials; and the pharmacological properties and mechanisms of action of biologically active agents of natural and synthetic origin. [ 1 ]
https://en.wikipedia.org/wiki/Vorozhtsov_Novosibirsk_Institute_of_Organic_Chemistry
Vorsetuzumab mafodotin ( SGN-75 ) is an antibody-drug conjugate (ADC) directed to the protein CD70, designed for the treatment of cancer . [ 1 ] It is a humanized monoclonal antibody, vorsetuzumab, conjugated with noncleavable monomethyl auristatin F (MMAF), a cytotoxic agent . [ citation needed ] The drug was developed by Seattle Genetics , Inc. It completed phase I clinical trials for renal cell carcinoma , [ 2 ] but development was discontinued in 2013. [ 3 ] No reason was given, but Seattle Genetics planned to begin clinical trials of SGN-CD70A in 2014. [ 3 ] This monoclonal antibody –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Vorsetuzumab_mafodotin
In fluid dynamics , a vortex ( pl. : vortices or vortexes ) [ 1 ] [ 2 ] is a region in a fluid in which the flow revolves around an axis line, which may be straight or curved. [ 3 ] [ 4 ] Vortices form in stirred fluids, and may be observed in smoke rings , whirlpools in the wake of a boat, and the winds surrounding a tropical cyclone , tornado or dust devil . Vortices are a major component of turbulent flow . The distribution of velocity, vorticity (the curl of the flow velocity), as well as the concept of circulation are used to characterise vortices. In most vortices, the fluid flow velocity is greatest next to its axis and decreases in inverse proportion to the distance from the axis. In the absence of external forces, viscous friction within the fluid tends to organise the flow into a collection of irrotational vortices, possibly superimposed on larger-scale flows, including larger-scale vortices. Once formed, vortices can move, stretch, twist, and interact in complex ways. A moving vortex carries some angular and linear momentum, energy, and mass with it.
A key concept in the dynamics of vortices is the vorticity , a vector that describes the local rotary motion at a point in the fluid, as would be perceived by an observer that moves along with it. Conceptually, the vorticity could be observed by placing a tiny rough ball at the point in question, free to move with the fluid, and observing how it rotates about its center. The direction of the vorticity vector is defined to be the direction of the axis of rotation of this imaginary ball (according to the right-hand rule ) while its length is twice the ball's angular velocity. Mathematically, the vorticity is defined as the curl (or rotational) of the velocity field of the fluid, usually denoted by ω → {\displaystyle {\vec {\omega }}} and expressed by the vector analysis formula ∇ × u → {\displaystyle \nabla \times {\vec {\mathit {u}}}} , where ∇ {\displaystyle \nabla } is the nabla operator and u → {\displaystyle {\vec {\mathit {u}}}} is the local flow velocity. [ 5 ] The local rotation measured by the vorticity ω → {\displaystyle {\vec {\omega }}} must not be confused with the angular velocity vector of that portion of the fluid with respect to the external environment or to any fixed axis. In a vortex, in particular, ω → {\displaystyle {\vec {\omega }}} may be opposite to the mean angular velocity vector of the fluid relative to the vortex's axis. In theory, the speed u of the particles (and, therefore, the vorticity) in a vortex may vary with the distance r from the axis in many ways.
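The definition of vorticity as a curl can be checked numerically on a simple velocity field. The sketch below uses central finite differences on a rigid-body rotation (the angular velocity value is made up); for that field the vorticity is uniform and equals twice the angular velocity:

```python
w = 1.5                                   # angular velocity of rigid-body rotation

def velocity(x, y):
    # rigid-body rotation about the z-axis: u = omega x r = (-w*y, w*x)
    return (-w * y, w * x)

def vorticity_z(x, y, h=1e-5):
    # z-component of the curl, dv/dx - du/dy, by central differences
    dv_dx = (velocity(x + h, y)[1] - velocity(x - h, y)[1]) / (2 * h)
    du_dy = (velocity(x, y + h)[0] - velocity(x, y - h)[0]) / (2 * h)
    return dv_dx - du_dy

# For rigid rotation the vorticity equals twice the angular velocity, everywhere
assert abs(vorticity_z(0.3, -0.7) - 2 * w) < 1e-6
```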
There are two important special cases, however: In the absence of external forces, a vortex usually evolves fairly quickly toward the irrotational flow pattern [ citation needed ] , where the flow velocity u is inversely proportional to the distance r . Irrotational vortices are also called free vortices . For an irrotational vortex, the circulation is zero along any closed contour that does not enclose the vortex axis, and it has a fixed value, Γ , for any contour that does enclose the axis once. [ 6 ] The tangential component of the particle velocity is then u θ = Γ 2 π r {\displaystyle u_{\theta }={\tfrac {\Gamma }{2\pi r}}} . The angular momentum per unit mass relative to the vortex axis is therefore constant, r u θ = Γ 2 π {\displaystyle ru_{\theta }={\tfrac {\Gamma }{2\pi }}} . The ideal irrotational vortex flow in free space is not physically realizable, since it would imply that the particle speed (and hence the force needed to keep particles in their circular paths) would grow without bound as one approaches the vortex axis. Indeed, in real vortices there is always a core region surrounding the axis where the particle velocity stops increasing and then decreases to zero as r goes to zero. Within that region, the flow is no longer irrotational: the vorticity ω → {\displaystyle {\vec {\omega }}} becomes non-zero, with direction roughly parallel to the vortex axis. The Rankine vortex is a model that assumes a rigid-body rotational flow where r is less than a fixed distance r 0 , and irrotational flow outside that core region. In a viscous fluid, irrotational flow contains viscous dissipation everywhere, yet there are no net viscous forces, only viscous stresses. [ 7 ] Due to the dissipation, this means that sustaining an irrotational viscous vortex requires continuous input of work at the core (for example, by steadily turning a cylinder at the core).
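For the irrotational profile above, the circulation around any circle enclosing the axis is the same constant Γ, independent of the radius. A small numeric sketch (with a made-up Γ) that integrates u·dl around two circles of different radii:

```python
import math

Gamma = 4.0

def u_theta(r):
    # tangential speed of an irrotational (free) vortex
    return Gamma / (2 * math.pi * r)

def circulation(radius, segments=10000):
    # integrate u . dl around a circle of the given radius;
    # the velocity is purely tangential, so u . dl = u_theta * dl
    dl = 2 * math.pi * radius / segments
    return sum(u_theta(radius) * dl for _ in range(segments))

# Same circulation for a small and a large contour around the axis
assert abs(circulation(0.5) - Gamma) < 1e-9
assert abs(circulation(5.0) - Gamma) < 1e-9
```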
In free space there is no energy input at the core, and thus the compact vorticity held in the core will naturally diffuse outwards, converting the core to a gradually-slowing and gradually-growing rigid-body flow, surrounded by the original irrotational flow. Such a decaying irrotational vortex has an exact solution of the viscous Navier–Stokes equations , known as a Lamb–Oseen vortex . A rotational vortex – a vortex that rotates in the same way as a rigid body – cannot exist indefinitely in that state except through the application of some extra force that is not generated by the fluid motion itself. It has non-zero vorticity everywhere outside the core. Rotational vortices are also called rigid-body vortices or forced vortices. For example, if a water bucket is spun at constant angular speed w about its vertical axis, the water will eventually rotate in rigid-body fashion. The particles will then move along circles, with velocity u equal to wr . [ 6 ] In that case, the free surface of the water will assume a parabolic shape. In this situation, the rigid rotating enclosure provides an extra force, namely an extra pressure gradient in the water, directed inwards, that prevents transition of the rigid-body flow to the irrotational state. Vortex structures are defined by their vorticity , the local rotation rate of fluid particles. They can be formed via the phenomenon known as boundary layer separation which can occur when a fluid moves over a surface and experiences a rapid deceleration from the free-stream velocity to zero due to the no-slip condition . This rapid negative acceleration creates a boundary layer which causes a local rotation of fluid at the wall (i.e. vorticity ) which is referred to as the wall shear rate. The thickness of this boundary layer is proportional to √ ( ν t ) {\displaystyle \surd (\nu t)} (where ν is the kinematic viscosity of the fluid and t is time).
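The Lamb–Oseen vortex mentioned above has the standard closed form u_θ(r, t) = Γ/(2πr) · (1 − exp(−r²/(4νt))). A quick numeric check of its two limiting regimes (the parameter values are made up for illustration):

```python
import math

Gamma, nu, t = 1.0, 1e-3, 10.0   # circulation, kinematic viscosity, time (made up)

def u_theta(r):
    # Lamb-Oseen tangential velocity at radius r and fixed time t
    return Gamma / (2 * math.pi * r) * (1 - math.exp(-r * r / (4 * nu * t)))

core = 2 * math.sqrt(nu * t)     # viscous core radius scale

# Far from the core the profile approaches the irrotational vortex Gamma/(2*pi*r)...
r = 50 * core
assert abs(u_theta(r) - Gamma / (2 * math.pi * r)) < 1e-9
# ...while near the axis it approaches rigid-body rotation, u proportional to r.
r1, r2 = core / 100, core / 50
assert abs(u_theta(r2) / u_theta(r1) - r2 / r1) < 1e-3
```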
If the diameter or thickness of the vessel or fluid is less than the boundary layer thickness then the boundary layer will not separate and vortices will not form. However, when the boundary layer does grow beyond this critical thickness, separation occurs and generates vortices. This boundary layer separation can also occur in the presence of adverse pressure gradients (i.e. pressure that increases in the downstream direction). These occur along curved surfaces and at changes in geometry, such as a convex surface. A notable example of a severe geometric change is the trailing edge of a bluff body, where the flow decelerates and boundary-layer separation and vortex formation therefore occur. Another form of vortex formation on a boundary is when fluid flows perpendicularly into a wall and creates a splash effect. The velocity streamlines are immediately deflected and decelerated so that the boundary layer separates and forms a toroidal vortex ring. [ 8 ] In a stationary vortex, the typical streamline (a line that is everywhere tangent to the flow velocity vector) is a closed loop surrounding the axis; and each vortex line (a line that is everywhere tangent to the vorticity vector) is roughly parallel to the axis. A surface that is everywhere tangent to both flow velocity and vorticity is called a vortex tube . In general, vortex tubes are nested around the axis of rotation. The axis itself is one of the vortex lines, a limiting case of a vortex tube with zero diameter. According to Helmholtz's theorems , a vortex line cannot start or end in the fluid – except momentarily, in non-steady flow, while the vortex is forming or dissipating. In general, vortex lines (in particular, the axis line) are either closed loops or end at the boundary of the fluid. A whirlpool is an example of the latter, namely a vortex in a body of water whose axis ends at the free surface. A vortex tube whose vortex lines are all closed will be a closed torus -like surface.
A newly created vortex will promptly extend and bend so as to eliminate any open-ended vortex lines. For example, when an airplane engine is started, a vortex usually forms ahead of each propeller , or the turbofan of each jet engine . One end of the vortex line is attached to the engine, while the other end usually stretches out and bends until it reaches the ground. When vortices are made visible by smoke or ink trails, they may seem to have spiral pathlines or streamlines. However, this appearance is often an illusion and the fluid particles are moving in closed paths. The spiral streaks that are taken to be streamlines are in fact clouds of the marker fluid that originally spanned several vortex tubes and were stretched into spiral shapes by the non-uniform flow velocity distribution. The fluid motion in a vortex creates a dynamic pressure (in addition to any hydrostatic pressure) that is lowest in the core region, closest to the axis, and increases as one moves away from it, in accordance with Bernoulli's principle . One can say that it is the gradient of this pressure that forces the fluid to follow a curved path around the axis. In a rigid-body vortex flow of a fluid with constant density , the dynamic pressure is proportional to the square of the distance r from the axis. In a constant gravity field, the free surface of the liquid, if present, is a concave paraboloid . In an irrotational vortex flow with constant fluid density and cylindrical symmetry, the dynamic pressure varies as P ∞ − ⁠ K / r 2 ⁠ , where P ∞ is the limiting pressure infinitely far from the axis. This formula provides another constraint for the extent of the core, since the pressure cannot be negative. The free surface (if present) dips sharply near the axis line, with depth inversely proportional to r 2 . The shape formed by the free surface is called a hyperboloid , or " Gabriel's Horn " (by Evangelista Torricelli ). 
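The two free-surface shapes described above can be written out explicitly: rigid-body rotation gives the paraboloid z = ω²r²/(2g), while an irrotational line vortex of circulation Γ gives a dip z = −Γ²/(8π²gr²) below the far-field level (the latter follows from Bernoulli's principle with u = Γ/(2πr)). A sketch, with illustrative parameter values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rigid_body_surface(r, omega=2.0):
    """Free-surface height above the axis level for rigid-body rotation
    at angular speed omega: the concave paraboloid z = omega^2 r^2 / (2 g)."""
    return omega**2 * r**2 / (2.0 * G)

def irrotational_surface(r, gamma=1.0):
    """Free-surface depression below the far-field level for an irrotational
    vortex of circulation gamma: z = -gamma^2 / (8 pi^2 g r^2)."""
    return -gamma**2 / (8.0 * math.pi**2 * G * r**2)
```

Doubling the radius quadruples the height of the paraboloid but reduces the irrotational dip by a factor of four, which is the sharp near-axis depression described above.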
The core of a vortex in air is sometimes visible because water vapor condenses as the low pressure of the core causes adiabatic cooling ; the funnel of a tornado is an example. When a vortex line ends at a boundary surface, the reduced pressure may also draw matter from that surface into the core. For example, a dust devil is a column of dust picked up by the core of an air vortex attached to the ground. A vortex that ends at the free surface of a body of water (like the whirlpool that often forms over a bathtub drain) may draw a column of air down the core. The forward vortex extending from a jet engine of a parked airplane can suck water and small stones into the core and then into the engine. Vortices need not be steady-state features; they can move and change shape. In a moving vortex, the particle paths are not closed, but are open, loopy curves like helices and cycloids . A vortex flow might also be combined with a radial or axial flow pattern. In that case the streamlines and pathlines are not closed curves but spirals or helices, respectively. This is the case in tornadoes and in drain whirlpools. A vortex with helical streamlines is said to be solenoidal . As long as the effects of viscosity and diffusion are negligible, the fluid in a moving vortex is carried along with it. In particular, the fluid in the core (and matter trapped by it) tends to remain in the core as the vortex moves about. This is a consequence of Helmholtz's second theorem . Thus vortices (unlike surface waves and pressure waves ) can transport mass, energy and momentum over considerable distances compared to their size, with surprisingly little dispersion. This effect is demonstrated by smoke rings and exploited in vortex ring toys and guns . Two or more vortices that are approximately parallel and circulating in the same direction will attract and eventually merge to form a single vortex, whose circulation will equal the sum of the circulations of the constituent vortices. 
For example, an airplane wing that is developing lift will create a sheet of small vortices at its trailing edge. These small vortices merge to form a single wingtip vortex , less than one wing chord downstream of that edge. This phenomenon also occurs with other active airfoils , such as propeller blades. On the other hand, two parallel vortices with opposite circulations (such as the two wingtip vortices of an airplane) tend to remain separate. Vortices contain substantial energy in the circular motion of the fluid. In an ideal fluid this energy can never be dissipated and the vortex would persist forever. However, real fluids exhibit viscosity and this dissipates energy very slowly from the core of the vortex. It is only through dissipation of a vortex due to viscosity that a vortex line can end in the fluid, rather than at the boundary of the fluid.
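This slow viscous decay is exactly what the Lamb–Oseen solution mentioned earlier describes: the swirl velocity is u_θ(r, t) = Γ/(2πr)·(1 − e^(−r²/(4νt))), an irrotational far field around a core that spreads like √(νt). A small sketch, with illustrative circulation and viscosity values:

```python
import math

def lamb_oseen_velocity(r, t, gamma=1.0, nu=1.0e-3):
    """Azimuthal velocity of a Lamb-Oseen vortex at radius r and time t > 0."""
    if r == 0.0:
        return 0.0  # the axis itself is a stagnation line
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-r**2 / (4.0 * nu * t)))

# Far from the core the flow is still the irrotational Gamma/(2 pi r) field;
# near the axis it tends to rigid-body rotation over a core of radius ~ sqrt(nu t).
u_far = lamb_oseen_velocity(10.0, t=1.0)
u_near = lamb_oseen_velocity(0.01, t=1.0)
```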
https://en.wikipedia.org/wiki/Vortex
A vortex breaker is a device used in engineering to stop the formation of a vortex when a fluid ( liquid or gas ) is drained from a vessel such as a tank or vapor–liquid separator . The formation of vortices can entrain vapor in the liquid stream, leading to poor separation in process steps such as distillation or excessive pressure drop, [ 1 ] or causing cavitation of downstream pumps . [ 2 ] [ 3 ] Vortices can also re-entrain solid particles previously separated from a gas stream in a solid–gas separation device such as a cyclone . [ 4 ] Many different designs of vortex breaker are available. Some use radial vanes or baffles around the liquid exit to remove some of the angular velocity of the liquid. The "floor grate" design uses a system of grating similar to the metal floor of a catwalk . Different authors give different rules of thumb for vortex breaker design. [ 3 ]
https://en.wikipedia.org/wiki/Vortex_breaker
A vortex generator ( VG ) is an aerodynamic device, consisting of a small vane usually attached to a lifting surface (or airfoil , such as an aircraft wing ) [ 1 ] or a rotor blade of a wind turbine . [ 2 ] VGs may also be attached to some part of an aerodynamic vehicle such as an aircraft fuselage or a car. When the airfoil or the body is in motion relative to the air, the VG creates a vortex , [ 1 ] [ 3 ] which, by removing some part of the slow-moving boundary layer in contact with the airfoil surface, delays local flow separation and aerodynamic stalling , thereby improving the effectiveness of wings and control surfaces , such as flaps , elevators , ailerons , and rudders . [ 3 ] Vortex generators are most often used to delay flow separation . To accomplish this they are often placed on the external surfaces of vehicles [ 4 ] and wind turbine blades. On both aircraft and wind turbine blades they are usually installed quite close to the leading edge of the aerofoil in order to maintain steady airflow over the control surfaces at the trailing edge. [ 3 ] VGs are typically rectangular or triangular, about as tall as the local boundary layer , and run in spanwise lines usually near the thickest part of the wing. [ 1 ] They can be seen on the wings and vertical tails of many airliners . Vortex generators are positioned obliquely so that they have an angle of attack with respect to the local airflow [ 1 ] in order to create a tip vortex which draws energetic, rapidly moving outside air into the slow-moving boundary layer in contact with the surface. A turbulent boundary layer is less likely to separate than a laminar one, and is therefore desirable to ensure effectiveness of trailing-edge control surfaces. Vortex generators are used to trigger this transition. Other devices such as vortilons , leading-edge extensions , and leading-edge cuffs , [ 5 ] also delay flow separation at high angles of attack by re-energizing the boundary layer. 
[ 1 ] [ 3 ] Examples of aircraft which use VGs include the ST Aerospace A-4SU Super Skyhawk and Symphony SA-160 . For swept-wing transonic designs, VGs alleviate potential shock-stall problems (e.g., Harrier , Blackburn Buccaneer , Gloster Javelin ). Many aircraft carry vane vortex generators from time of manufacture, but there are also aftermarket suppliers who sell VG kits to improve the STOL performance of some light aircraft. [ 6 ] Aftermarket suppliers claim (i) that VGs lower stall speed and reduce take-off and landing speeds, and (ii) that VGs increase the effectiveness of ailerons, elevators and rudders, thereby improving controllability and safety at low speeds. [ 7 ] For home-built and experimental kitplanes , VGs are cheap, cost-effective and can be installed quickly; but for certified aircraft installations, certification costs can be high, making the modification a relatively expensive process. [ 6 ] [ 8 ] Owners fit aftermarket VGs primarily to gain benefits at low speeds, but a downside is that such VGs may reduce cruise speed slightly. In tests performed on a Cessna 182 and a Piper PA-28-235 Cherokee , independent reviewers have documented a loss of cruise speed of 1.5 to 2.0 kn (2.8 to 3.7 km/h). However, these losses are relatively minor, since an aircraft wing at high speed has a small angle of attack, thereby reducing VG drag to a minimum. [ 8 ] [ 9 ] [ 10 ] Owners have reported that on the ground, it can be harder to clear snow and ice from wing surfaces with VGs than from a smooth wing, but VGs are not generally prone to inflight icing as they reside within the boundary layer of airflow. VGs may also have sharp edges which can tear the fabric of airframe covers and may thus require special covers to be made. 
[ 8 ] [ 9 ] [ 10 ] For twin-engined aircraft, manufacturers claim that VGs reduce single-engine control speed ( Vmca ), increase zero fuel and gross weight, improve the effectiveness of ailerons and rudder, provide a smoother ride in turbulence and make the aircraft a more stable instrument platform. [ 6 ] Some VG kits available for light twin-engine airplanes may allow an increase in maximum takeoff weight . [ 6 ] The maximum takeoff weight of a twin-engine airplane is determined by structural requirements and single-engine climb performance requirements (which are lower for a lower stall speed). For many light twin-engine airplanes, the single-engine climb performance requirements determine a lower maximum weight than the structural requirements do. Consequently, anything that can be done to improve the single-engine-inoperative climb performance will bring about an increase in maximum takeoff weight. [ 8 ] In the US from 1945 [ 11 ] until 1991, [ 12 ] the one-engine-inoperative climb requirement for multi-engine airplanes with a maximum takeoff weight of 6,000 lb (2,700 kg) or less was as follows: All multi-engine airplanes having a stalling speed $V_{s0}$ greater than 70 miles per hour shall have a steady rate of climb of at least $0.02(V_{s0})^{2}$ in feet per minute at an altitude of 5,000 feet with the critical engine inoperative and the remaining engines operating at not more than maximum continuous power, the inoperative propeller in the minimum drag position, landing gear retracted, wing flaps in the most favorable position … where $V_{s0}$ is the stalling speed in the landing configuration in miles per hour. Installation of vortex generators can usually bring about a slight reduction in the stalling speed of an airplane [ 4 ] and therefore reduce the required one-engine-inoperative climb performance.
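The effect of a small stall-speed reduction on that pre-1991 requirement can be illustrated directly; the 0.02(Vs0)² formula is from the regulation quoted above, while the example speeds are hypothetical:

```python
def required_climb_fpm(vs0_mph):
    """Pre-1991 one-engine-inoperative climb requirement (ft/min) for light
    twins with stalling speed vs0_mph (mph) above 70 mph: 0.02 * Vs0^2."""
    return 0.02 * vs0_mph**2

# A hypothetical VG kit that trims the stall speed from 80 mph to 76 mph
# lowers the required single-engine rate of climb:
before = required_climb_fpm(80)  # 128 ft/min
after = required_climb_fpm(76)   # 115.52 ft/min
```

Because the requirement scales with the square of stall speed, even a few miles per hour of stall-speed reduction gives a meaningful relaxation of the climb requirement, and hence room for a higher maximum takeoff weight.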
The reduced requirement for climb performance allows an increase in maximum takeoff weight, at least up to the maximum weight allowed by structural requirements. [ 8 ] An increase in maximum weight allowed by structural requirements can usually be achieved by specifying a maximum zero fuel weight or, if a maximum zero fuel weight is already specified as one of the airplane's limitations, by specifying a new higher maximum zero fuel weight. [ 8 ] For these reasons, vortex generator kits for many light twin-engine airplanes are accompanied by a reduction in maximum zero fuel weight and an increase in maximum takeoff weight. [ 8 ] The one-engine-inoperative rate-of-climb requirement does not apply to single-engine airplanes, so gains in the maximum takeoff weight (based on stall speed or structural considerations) are less significant compared to those for 1945–1991 twins. After 1991, the airworthiness certification requirements in the USA specify the one-engine-inoperative climb requirement as a gradient independent of stalling speed, so there is less opportunity for vortex generators to increase the maximum takeoff weight of multi-engine airplanes whose certification basis is FAR 23 at amendment 23-42 or later. [ 12 ] Because the landing weights of most light aircraft are determined by structural considerations and not by stall speed, most VG kits increase only the takeoff weight and not the landing weight. Any increase in landing weight would require either structural modifications or re-testing the aircraft at the higher landing weight to demonstrate that the certification requirements are still met. [ 8 ] However, after a lengthy flight, sufficient fuel may have been used, thereby bringing the aircraft back below the permitted maximum landing weight. Vortex generators have been used on the wing underside of Airbus A320 family aircraft to reduce noise generated by airflow over circular pressure equalisation vents for the fuel tanks. 
Lufthansa claims a noise reduction of up to 2 dB can thus be achieved. [ 13 ]
https://en.wikipedia.org/wiki/Vortex_generator
The vortex lattice method (VLM) is a numerical method used in computational fluid dynamics , mainly in the early stages of aircraft design and in aerodynamic education at university level. The VLM models the lifting surfaces, such as a wing , of an aircraft as an infinitely thin sheet of discrete vortices to compute lift and induced drag . The influence of thickness and viscosity is neglected. VLMs can compute the flow around a wing with a rudimentary geometrical definition. For a rectangular wing it is enough to know the span and chord. At the other end of the spectrum, they can describe the flow around a fairly complex aircraft geometry (with multiple lifting surfaces with taper, kinks, twist, camber, trailing edge control surfaces and many other geometric features). By simulating the flow field, one can extract the pressure distribution or, as in the case of the VLM, the force distribution around the simulated body. This knowledge is then used to compute the aerodynamic coefficients and their derivatives that are important for assessing the aircraft's handling qualities in the conceptual design phase. With an initial estimate of the pressure distribution on the wing, the structural designers can start designing the load-bearing parts of the wings, fin, tailplane and other lifting surfaces. Additionally, while the VLM cannot compute the viscous drag, the induced drag stemming from the production of lift can be estimated. Hence, as the drag must be balanced with the thrust in the cruise configuration, the propulsion group can also get important data from a VLM simulation. John DeYoung provides a background history of the VLM in the NASA Langley workshop documentation SP-405. [ 1 ] The VLM is the extension of Prandtl's lifting-line theory , [ 2 ] where the wing of an aircraft is modeled as an infinite number of horseshoe vortices . The name was coined by V. M. Falkner in his Aeronautical Research Council paper of 1946.
[ 3 ] The method has since been developed and refined further by W. P. Jones, H. Schlichting, G. N. Ward and others. Although the computations needed can be carried out by hand, the VLM benefited from the advent of computers for the large amounts of computation required. Instead of only one horseshoe vortex per wing, as in the lifting-line theory , the VLM uses a lattice of horseshoe vortices, as described by Falkner in his first paper on the subject in 1943. [ 4 ] The number of vortices used varies with the required resolution of the pressure distribution and with the required accuracy of the computed aerodynamic coefficients. A typical number of vortices would be around 100 for an entire aircraft wing; an Aeronautical Research Council report by Falkner published in 1949 mentions the use of an "84-vortex lattice before the standardisation of the 126-lattice" (p. 4). [ 5 ] The method is comprehensively described in all major aerodynamic textbooks, such as Katz & Plotkin, [ 6 ] Anderson, [ 7 ] Bertin & Smith, [ 8 ] Houghton & Carpenter, [ 9 ] or Drela. [ 10 ] The vortex lattice method is built on the theory of ideal flow, also known as potential flow . Ideal flow is a simplification of the real flow experienced in nature; however, for many engineering applications this simplified representation has all of the properties that are important from the engineering point of view. The method neglects all viscous effects. Turbulence, dissipation and boundary layers are not resolved at all. However, lift-induced drag can be assessed and, taking special care, some stall phenomena can be modelled.
The following assumptions are made regarding the problem in the vortex lattice method: By the above assumptions the flow field is a conservative vector field , which means that there exists a perturbation velocity potential $\varphi$ such that the total velocity vector is given by $\mathbf{V}=\mathbf{V}_{\infty}+\nabla\varphi$, and that $\varphi$ satisfies Laplace's equation . Laplace's equation is a second-order linear equation, and being so it is subject to the principle of superposition: if $\varphi_{1}$ and $\varphi_{2}$ are two solutions of the linear differential equation, then the linear combination $c_{1}\varphi_{1}+c_{2}\varphi_{2}$ is also a solution for any values of the constants $c_{1}$ and $c_{2}$. As Anderson [ 7 ] put it, "A complicated flow pattern for an irrotational, incompressible flow can be synthesized by adding together a number of elementary flows, which are also irrotational and incompressible." Such elementary flows are the point source or sink, the doublet and the vortex line , each being a solution of Laplace's equation. These may be superposed in many ways to create line sources, vortex sheets and so on. In the vortex lattice method, each such elementary flow is the velocity field of a horseshoe vortex with some strength $\Gamma$. All the lifting surfaces of an aircraft are divided into some number of quadrilateral panels, and a horseshoe vortex and a collocation point (or control point) are placed on each panel. The transverse segment of the vortex is at the 1/4 chord position of the panel, while the collocation point is at the 3/4 chord position. The vortex strength $\Gamma$ is to be determined.
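The superposition principle can be checked directly with complex potentials: a uniform stream and a point vortex are each solutions of Laplace's equation, and their velocity fields simply add. A Python sketch, where the speed and circulation values are illustrative:

```python
import cmath

V_INF = 1.0   # freestream speed (illustrative)
GAMMA = 2.0   # vortex circulation (illustrative)

def w_uniform(z):
    """Conjugate velocity u - iv of a uniform stream along x."""
    return complex(V_INF, 0.0)

def w_vortex(z, z0=0j):
    """Conjugate velocity of a point vortex of circulation GAMMA at z0."""
    return -1j * GAMMA / (2.0 * cmath.pi * (z - z0))

def w_total(z):
    """Superposition: the combined flow is the sum of the elementary flows,
    because Laplace's equation is linear."""
    return w_uniform(z) + w_vortex(z)

# The combined field has a stagnation point at z = i Gamma / (2 pi V_inf),
# which neither elementary flow has on its own.
z_stag = 1j * GAMMA / (2.0 * cmath.pi * V_INF)
```

Combining elementary solutions this way, with the vortex strengths as the unknowns, is exactly the construction the VLM automates on a lattice of panels.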
A normal vector $\mathbf{n}$ is also placed at each collocation point, set normal to the camber surface of the actual lifting surface. For a problem with $N$ panels, the perturbation velocity at collocation point $i$ is given by summing the contributions of all the horseshoe vortices in terms of an aerodynamic influence coefficient (AIC) matrix $\mathbf{w}_{ij}$: $\nabla\varphi_{i}=\sum_{j=1}^{N}\mathbf{w}_{ij}\Gamma_{j}$. The freestream velocity vector is given in terms of the freestream speed $V_{\infty}$ and the angles of attack and sideslip, $\alpha,\beta$: $\mathbf{V}_{\infty}=V_{\infty}{\begin{bmatrix}\cos\alpha\cos\beta\\-\sin\beta\\\sin\alpha\cos\beta\end{bmatrix}}$. A Neumann boundary condition is applied at each collocation point, which prescribes that the normal velocity across the camber surface is zero. Alternate implementations may also use the Dirichlet boundary condition directly on the velocity potential . The condition reads $\mathbf{v}_{i}\cdot\mathbf{n}_{i}=\left(\mathbf{V}_{\infty}+\sum_{j=1}^{N}\mathbf{w}_{ij}\Gamma_{j}\right)\cdot\mathbf{n}_{i}=0$, which is also known as the flow tangency condition. By evaluating the dot products above, the following system of equations results.
The new normalwash AIC matrix is $a_{ij}=\mathbf{w}_{ij}\cdot\mathbf{n}_{i}$, and the right-hand side is formed from the freestream speed and the two aerodynamic angles: $b_{i}=V_{\infty}[-\cos\alpha\cos\beta,\ \sin\beta,\ -\sin\alpha\cos\beta]\cdot\mathbf{n}_{i}$. The linear system is ${\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1N}\\a_{21}&\ddots&&\vdots\\\vdots&&\ddots&\vdots\\a_{N1}&\cdots&\cdots&a_{NN}\end{bmatrix}}{\begin{bmatrix}\Gamma_{1}\\\Gamma_{2}\\\vdots\\\Gamma_{N}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots\\b_{N}\end{bmatrix}}$. This system of equations is solved for all the vortex strengths $\Gamma_{i}$. The total force vector $\mathbf{F}$ and total moment vector $\mathbf{M}$ about the origin are then computed by summing the contributions of the forces $\mathbf{F}_{i}$ on all the individual horseshoe vortices, with $\rho$ being the fluid density: $\mathbf{F}_{i}=\rho\Gamma_{i}(\mathbf{V}_{\infty}+\mathbf{v}_{i})\times\mathbf{l}_{i}$, $\mathbf{F}=\sum_{i=1}^{N}\mathbf{F}_{i}$, $\mathbf{M}=\sum_{i=1}^{N}\mathbf{r}_{i}\times\mathbf{F}_{i}$. Here, $\mathbf{l}_{i}$ is the vortex's transverse segment vector, and $\mathbf{v}_{i}$ is the perturbation velocity at this segment's center location $\mathbf{r}_{i}$ (not at the collocation point). The lift and induced drag are obtained from the $x,y,z$ components of the total force vector $\mathbf{F}$.
For the case of zero sideslip these are given by $D_{i}=F_{x}\cos\alpha+F_{z}\sin\alpha$ and $L=-F_{x}\sin\alpha+F_{z}\cos\alpha$. The preliminary design of airplanes requires unsteady aerodynamic models, usually written in the frequency domain for aeroelastic analyses. Commonly used is the doublet lattice method, where the wing system is subdivided into panels. Each panel has a line of doublets of acceleration potential along the first-quarter-chord line, similarly to what is usually done in the vortex lattice method. Each panel has a load point where the lifting force is assumed to be applied and a control point where the aeroelastic boundary condition is enforced. The doublet lattice method evaluated at frequency zero is usually obtained with a vortex lattice formulation.
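The assembly-and-solve procedure described above can be sketched end to end in a minimal form: one chordwise row of horseshoe vortices on a flat rectangular wing (a Weissinger-type lattice), with the semi-infinite trailing legs approximated by very long finite segments. The function names, the pure-Python linear solver and the parameter values are illustrative choices, not from the sources cited above:

```python
import math

def segment_velocity(p, a, b):
    """Velocity induced at p by a unit-strength straight vortex segment a->b
    (Biot-Savart law for a finite filament)."""
    r1 = [p[k] - a[k] for k in range(3)]
    r2 = [p[k] - b[k] for k in range(3)]
    cr = [r1[1]*r2[2] - r1[2]*r2[1],
          r1[2]*r2[0] - r1[0]*r2[2],
          r1[0]*r2[1] - r1[1]*r2[0]]
    c2 = sum(c*c for c in cr)
    if c2 < 1e-12:
        return [0.0, 0.0, 0.0]  # evaluation point lies on the filament axis
    n1 = math.sqrt(sum(x*x for x in r1))
    n2 = math.sqrt(sum(x*x for x in r2))
    r0 = [b[k] - a[k] for k in range(3)]
    f = sum(r0[k]*(r1[k]/n1 - r2[k]/n2) for k in range(3)) / (4.0*math.pi*c2)
    return [f*c for c in cr]

def horseshoe_velocity(p, left, right, far=1.0e6):
    """Velocity at p from a unit horseshoe vortex: trailing leg from far
    downstream into `left`, bound segment left->right, leg right->downstream."""
    v = [0.0, 0.0, 0.0]
    for a, b in (([left[0]+far, left[1], left[2]], left),
                 (left, right),
                 (right, [right[0]+far, right[1], right[2]])):
        dv = segment_velocity(p, a, b)
        v = [v[k] + dv[k] for k in range(3)]
    return v

def solve(a, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col+1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n+1):
                m[r][c] -= f*m[col][c]
    x = [0.0]*n
    for r in range(n-1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c]*x[c] for c in range(r+1, n))) / m[r][r]
    return x

def vlm_flat_wing(span=8.0, chord=1.0, alpha_deg=5.0, n=40, v_inf=1.0):
    """Lift coefficient of a flat rectangular wing: bound vortices at the
    1/4-chord line, flow tangency enforced at 3/4-chord collocation points."""
    alpha = math.radians(alpha_deg)
    dy = span/n
    ys = [-span/2 + dy*j for j in range(n+1)]
    left = [[chord/4, ys[j], 0.0] for j in range(n)]
    right = [[chord/4, ys[j+1], 0.0] for j in range(n)]
    ctrl = [[3*chord/4, 0.5*(ys[j]+ys[j+1]), 0.0] for j in range(n)]
    # Wing lies in z = 0, so n_i = (0, 0, 1); the freestream is tilted by alpha.
    aic = [[horseshoe_velocity(ctrl[i], left[j], right[j])[2]
            for j in range(n)] for i in range(n)]
    rhs = [-v_inf*math.sin(alpha)]*n       # b_i = -V_inf . n_i
    gamma = solve(aic, rhs)
    # Kutta-Joukowski: L = rho V sum(Gamma dy); CL = 2 sum(Gamma dy)/(V S).
    return 2.0*sum(g*dy for g in gamma) / (v_inf*span*chord)

cl = vlm_flat_wing()  # aspect ratio 8 wing at 5 degrees angle of attack
```

For an aspect-ratio-8 flat wing this lands near the classical lifting-line estimate CL ≈ 2πα/(1 + 2/AR), illustrating how the lattice recovers both the lift slope and the finite-span correction.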
https://en.wikipedia.org/wiki/Vortex_lattice_method
In fluid dynamics , vortex shedding is an oscillating flow that takes place when a fluid such as air or water flows past a bluff (as opposed to streamlined) body at certain velocities, depending on the size and shape of the body. In this flow, vortices are created at the back of the body and detach periodically from either side of the body forming a Kármán vortex street . The fluid flow past the object creates alternating low-pressure vortices on the downstream side of the object. The object will tend to move toward the low-pressure zone. If the bluff structure is not mounted rigidly and the frequency of vortex shedding matches the resonance frequency of the structure, then the structure can begin to resonate , vibrating with harmonic oscillations driven by the energy of the flow. This vibration is the cause for overhead power line wires humming in the wind, [ 1 ] and for the fluttering of automobile whip radio antennas at some speeds. Tall chimneys constructed of thin-walled steel tubes can be sufficiently flexible that, in air flow with a speed in the critical range, vortex shedding can drive the chimney into violent oscillations that can damage or destroy the chimney. Vortex shedding was one of the causes proposed for the failure of the original Tacoma Narrows Bridge (Galloping Gertie) in 1940, but was rejected because the frequency of the vortex shedding did not match that of the bridge. The bridge actually failed by aeroelastic flutter . [ 2 ] A thrill ride, " VertiGo " at Cedar Point in Sandusky, Ohio suffered vortex shedding during the winter of 2001, causing one of the three towers to collapse. The ride was closed for the winter at the time. [ 3 ] In northeastern Iran, the Hashemi-Nejad natural gas refinery's flare stacks suffered vortex shedding seven times from 1975 to 2003. Some simulation and analyses were done, which revealed that the main cause was the interaction of the pilot flame and flare stack. The problem was solved by removing the pilot. 
[ 4 ] The frequency at which vortex shedding takes place for a cylinder is related to the Strouhal number by the following equation: $\mathrm{St}={fD}/{V}$, where $\mathrm{St}$ is the dimensionless Strouhal number , $f$ is the vortex shedding frequency (Hz), $D$ is the diameter of the cylinder (m), and $V$ is the flow velocity (m/s). The Strouhal number depends on the Reynolds number $\mathrm{Re}$, [ 5 ] but a value of 0.22 is commonly used. [ 6 ] As the Strouhal number is dimensionless, any consistent set of units can be used for the variables. Over four orders of magnitude in Reynolds number, from 10² to 10⁵, the Strouhal number varies only between 0.18 and 0.22. [ 5 ] Fairings can be fitted to a structure to streamline the flow past the structure, such as on an aircraft wing. Tall metal smokestacks or other tubular structures such as antenna masts or tethered cables can be fitted with an external corkscrew fin (a strake ) to deliberately introduce turbulence, so the load is less variable and resonant load frequencies have negligible amplitudes. [ 7 ] The effectiveness of helical strakes for reducing vortex-induced vibration was discovered in 1957 by Christopher Scruton and D. E. J. Walshe at the National Physical Laboratory in Great Britain. [ 8 ] They are therefore often described as Scruton strakes. For maximum effectiveness in suppression of vortices caused by air flow, each fin or strake should have a height of about 10 percent of the cylinder diameter. The pitch of each fin should be approximately 5 times the cylinder diameter. [ 9 ] A tuned mass damper can be used to mitigate vortex shedding in stacks and chimneys. A Stockbridge damper is used to mitigate aeolian vibrations caused by vortex shedding on overhead power lines .
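Rearranged for frequency, the relation gives f = St·V/D. A quick sketch, where the wind speed and cylinder diameter are illustrative values:

```python
def shedding_frequency(v, d, strouhal=0.22):
    """Vortex shedding frequency in Hz for flow at speed v (m/s) past a
    cylinder of diameter d (m), using St = f D / V."""
    return strouhal * v / d

# A 10 m/s wind past a 5 cm cylinder sheds vortices at about 44 Hz,
# well into the audible range - the hum of wires in the wind.
f = shedding_frequency(10.0, 0.05)
```

Comparing this frequency against a structure's resonance frequency is the first check in assessing whether vortex-induced vibration is a concern.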
https://en.wikipedia.org/wiki/Vortex_shedding
A vortex sheet is a term used in fluid mechanics for a surface across which there is a discontinuity in fluid velocity , such as in slippage of one layer of fluid over another. [ 1 ] While the tangential components of the flow velocity are discontinuous across the vortex sheet, the normal component of the flow velocity is continuous. The discontinuity in the tangential velocity means the flow has infinite vorticity on a vortex sheet. At high Reynolds numbers , vortex sheets tend to be unstable. In particular, they may exhibit Kelvin–Helmholtz instability . The formulation of the vortex sheet equation of motion is given in terms of a complex coordinate $z=x+iy$. The sheet is described parametrically by $z(s,t)$, where $s$ is the arclength between coordinate $z$ and a reference point, and $t$ is time. Let $\gamma(s,t)$ denote the strength of the sheet, that is, the jump in the tangential velocity. Then the velocity field induced by the sheet is ${\frac {\partial z^{*}}{\partial t}}=-{\frac {i}{2\pi }}\int_{-\infty }^{\infty }{\frac {\gamma (s',t)\,\mathrm {d} s'}{z(s,t)-z(s',t)}}$. The integral in the above equation is a Cauchy principal value integral. We now define $\Gamma$ as the integrated sheet strength, or circulation, between a point with arc length $s$ and the reference material point $s=0$ in the sheet.
$\Gamma (s,t)=\int_{0}^{s}\gamma (s',t)\,\mathrm {d} s'\quad \mathrm {and} \quad {\frac {\mathrm {d} \Gamma }{\mathrm {d} s}}=\gamma (s,t)$. As a consequence of Kelvin's circulation theorem, in the absence of external forces on the sheet, the circulation between any two material points in the sheet remains conserved, so $\mathrm {d} \Gamma /\mathrm {d} t=0$. The equation of motion of the sheet can be rewritten in terms of $\Gamma$ and $t$ by a change of variable, replacing the parameter $s$ with $\Gamma$: ${\frac {\partial z^{*}}{\partial t}}=-{\frac {i}{2\pi }}\int_{-\infty }^{\infty }{\frac {\mathrm {d} \Gamma '}{z(\Gamma ,t)-z(\Gamma ',t)}}$. This nonlinear integro-differential equation is called the Birkhoff–Rott equation. It describes the evolution of the vortex sheet given initial conditions. Greater detail on vortex sheets can be found in the textbook by Saffman (1977). Once formed, a vortex sheet will diffuse due to viscous action. Consider a planar unidirectional flow that at $t=0$ has velocity $u=U$ for $y>0$ and $u=-U$ for $y<0$, implying the presence of a vortex sheet at $y=0$. The velocity discontinuity smooths out according to [ 2 ] $u(y,t)=U\,\mathrm{erf}\!\left({\frac {y}{2{\sqrt {\nu t}}}}\right)$, where $\nu$ is the kinematic viscosity . The only non-zero vorticity component is in the $z$ direction, given by $\omega _{z}=-{\frac {\partial u}{\partial y}}=-{\frac {U}{\sqrt {\pi \nu t}}}\,e^{-y^{2}/(4\nu t)}$. A flat vortex sheet with periodic boundaries in the streamwise direction can be used to model a temporal free shear layer at high Reynolds number. Let us assume that the interval between the periodic boundaries is of length 1.
Then the equation of motion of the vortex sheet reduces to ${\frac {\partial z^{*}}{\partial t}}=-{\frac {i}{2}}\int_{0}^{1}\cot \pi \left(z(\Gamma ,t)-z(\Gamma ',t)\right)\,\mathrm {d} \Gamma '$. Note that the integral in the above equation is a Cauchy principal value integral. The initial condition for a flat vortex sheet with constant strength is $z(\Gamma ,0)=\Gamma$. The flat vortex sheet is an equilibrium solution. However, it is unstable to infinitesimal periodic disturbances of the form $\sum_{k=-\infty }^{\infty }A_{k}\mathrm {e} ^{i2\pi k\Gamma }$. Linear theory shows that the Fourier coefficient $A_{k}$ grows exponentially at a rate proportional to $k$. That is, the higher the wavenumber of a Fourier mode, the faster it grows. However, the linear theory cannot be extended much beyond the initial state. If nonlinear interactions are taken into account, asymptotic analysis suggests that for large $k$ and finite $t<t_{c}$, where $t_{c}$ is a critical value, the Fourier coefficient $A_{k}$ decays exponentially. The vortex sheet solution is expected to lose analyticity at the critical time. See Moore (1979), and Meiron, Baker and Orszag (1983). The vortex sheet solution as given by the Birkhoff–Rott equation cannot go beyond the critical time. The spontaneous loss of analyticity in a vortex sheet is a consequence of the mathematical model, since a real fluid with viscosity, however small, will never develop a singularity. Viscosity acts as a smoothing or regularization parameter in a real fluid. There have been extensive studies of vortex sheets, most of them by discrete or point vortex approximation, with or without desingularization.
Using a point-vortex approximation with δ-regularization, Krasny (1986) obtained a smooth roll-up of a vortex sheet into a double-branched spiral. Since point vortices are inherently chaotic, a Fourier filter is necessary to control the growth of round-off errors. Continuous approximation of a vortex sheet by vortex panels with arcwise diffusion of circulation density also shows that the sheet rolls up into a double-branched spiral. In many engineering and physical applications the growth of a temporal free shear layer is of interest. The thickness of a free shear layer is usually measured by the momentum thickness, which is defined as θ = ∫ from −∞ to ∞ of [1/4 − (⟨u⟩/(2U))²] dy, where ⟨u⟩ = (1/L) ∫₀ᴸ u(x, y, t) dx and U is the freestream velocity. Momentum thickness has the dimension of length, and the non-dimensional momentum thickness is given by θ_ND = θ/L. Momentum thickness can be used to measure the thickness of a vortex layer.
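The momentum-thickness integral can be evaluated with simple quadrature. In the sketch below the mean profile ⟨u⟩ = U tanh(y/d) is a hypothetical stand-in (not from the source), chosen because the integrand then becomes sech²(y/d)/4 and the exact answer is θ = d/2, which makes the computation easy to check.

```python
import math

def momentum_thickness(mean_u, U, ys):
    """Trapezoidal approximation of theta = int (1/4 - (<u>/(2U))^2) dy."""
    vals = [0.25 - (mean_u(y) / (2.0 * U)) ** 2 for y in ys]
    theta = 0.0
    for i in range(len(ys) - 1):
        theta += 0.5 * (vals[i] + vals[i + 1]) * (ys[i + 1] - ys[i])
    return theta

# Hypothetical mean profile <u> = U*tanh(y/d); analytically theta = d/2.
U, d = 1.0, 0.1
ys = [-2.0 + i * 1e-3 for i in range(4001)]
theta = momentum_thickness(lambda y: U * math.tanh(y / d), U, ys)
```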
https://en.wikipedia.org/wiki/Vortex_sheet
In fluid dynamics, vortex stretching is the lengthening of vortices in three-dimensional fluid flow, associated with a corresponding increase of the component of vorticity in the stretching direction, due to the conservation of angular momentum. [1] Vortex stretching is associated with a particular term in the vorticity equation. For example, vorticity transport in an incompressible inviscid flow is governed by Dω/Dt = (ω · ∇)u, where D/Dt is the material derivative. The source term on the right-hand side is the vortex stretching term. It amplifies the vorticity ω when the velocity is diverging in the direction parallel to ω. A simple example of vortex stretching in a viscous flow is provided by the Burgers vortex. Vortex stretching is at the core of the description of the turbulence energy cascade from the large scales to the small scales in turbulence. In turbulence, fluid elements are, on average, more lengthened than squeezed, so there is more vortex stretching than vortex squeezing. For incompressible flow, volume conservation of fluid elements implies that lengthening is accompanied by thinning of the fluid elements in the directions perpendicular to the stretching direction, which reduces the radial length scale of the associated vorticity. Finally, at the small scales of the order of the Kolmogorov microscales, the turbulence kinetic energy is dissipated into heat through the action of molecular viscosity. [2] [3]
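The amplification can be illustrated with a toy calculation. Assuming a vortex aligned with the z axis in an axial straining field with u_z = γz (the kind of strain field that sustains a Burgers-type vortex; the numbers below are illustrative), the stretching term reduces to dω/dt = γω, so vorticity grows exponentially:

```python
import math

def stretched_vorticity(omega0, gamma, t, steps=10000):
    """Integrate d(omega)/dt = gamma * omega with explicit Euler steps.
    This is the stretching term (omega . grad)u for omega parallel to z
    in a straining field with u_z = gamma * z."""
    dt = t / steps
    omega = omega0
    for _ in range(steps):
        omega += gamma * omega * dt
    return omega

omega = stretched_vorticity(1.0, 0.5, 2.0)
exact = math.exp(0.5 * 2.0)  # analytic solution omega0 * exp(gamma * t)
```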
https://en.wikipedia.org/wiki/Vortex_stretching
The vortex tube, also known as the Ranque–Hilsch vortex tube, is a mechanical device that separates a compressed gas into hot and cold streams. The gas emerging from the hot end can reach temperatures of 200 °C (390 °F), and the gas emerging from the cold end can reach −50 °C (−60 °F). [1] It has no moving parts and is considered an environmentally friendly technology because it can work solely on compressed air and does not use Freon. [2] Its efficiency is low, however, counteracting its other environmental advantages. Pressurised gas is injected tangentially into a swirl chamber near one end of a tube, leading to a rapid rotation—the first vortex—as it moves along the inner surface of the tube to the far end. A conical nozzle allows gas specifically from this outer layer to escape at that end through a valve. The remainder of the gas is forced to return in an inner vortex of reduced diameter within the outer vortex. Gas from the inner vortex transfers energy to the gas in the outer vortex, so the outer layer is hotter at the far end than it was initially. The gas in the central vortex is likewise cooler upon its return to the starting point, where it is released from the tube. There are two main approaches to explaining the temperature separation in a vortex tube: a thermodynamic approach based on first principles, and a phenomenological approach based on experimental observation. The thermodynamic approach is based on first-principles physics alone and is not limited to vortex tubes, but applies to moving gas in general. It shows that temperature separation in a moving gas is due only to enthalpy conservation in a moving frame of reference. The thermal process in the vortex tube can be estimated in the following way: the main physical phenomenon of the vortex tube is the temperature separation between the cold vortex core and the warm vortex periphery.
The "vortex tube effect" is fully explained with the work equation of Euler, [3] also known as Euler's turbine equation, which can be written in its most general vectorial form as [4] T − ω · (r × v)/c_p = constant along the passage, where T is the total, or stagnation, temperature of the rotating gas at radial position r, the absolute gas velocity as observed from the stationary frame of reference is denoted v, the angular velocity of the system is ω (here r, v and ω are vectors), and c_p is the isobaric heat capacity of the gas. This equation was published in 2012; it explains the fundamental operating principle of vortex tubes (an animated demonstration is available [5]). The search for this explanation began in 1933, when the vortex tube was discovered, and continued for more than 80 years. The above equation is valid for an adiabatic turbine passage; it shows that while gas moving towards the center is getting colder, the peripheral gas in the passage is "getting faster". Therefore, vortex cooling is due to angular propulsion: the more the gas cools on its way to the center, the more rotational energy it delivers to the vortex, and the vortex rotates even faster. This explanation stems directly from the law of energy conservation. Compressed gas at room temperature is expanded in order to gain speed through a nozzle; it then climbs the centrifugal barrier of rotation, during which energy is also lost. The lost energy is delivered to the vortex, which speeds up its rotation. In a vortex tube, the cylindrical surrounding wall confines the flow at the periphery and thus forces conversion of kinetic into internal energy, which produces hot air at the hot exit. Therefore, the vortex tube is a rotorless turboexpander. [6] It consists of a rotorless radial inflow turbine (cold end, center) and a rotorless centrifugal compressor (hot end, periphery).
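As a rough illustration (all numbers below are assumed for illustration, not taken from the source), Euler's turbine relation implies a stagnation-temperature difference of order ω·r·v_θ/c_p between the rotating periphery and the still core:

```python
# Illustrative estimate: for solid-body-like rotation where omega = v_theta/r,
# the Euler turbine relation gives a stagnation-temperature change of
# c_p * dT = omega * r * v_theta between periphery and axis.
def temperature_drop(r, v_theta, cp=1005.0):
    """Stagnation temperature difference (K) between radius r and the axis.
    cp is the isobaric heat capacity of air in J/(kg K)."""
    omega = v_theta / r
    return omega * r * v_theta / cp   # simplifies to v_theta**2 / cp

# An assumed 300 m/s swirl at the periphery gives a drop of roughly 90 K.
dT = temperature_drop(r=0.005, v_theta=300.0)
```

Note that r cancels in this idealization: the estimate depends only on the swirl speed, which is why tens of kelvins of separation are plausible for the high swirl velocities reached in vortex tubes.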
The work output of the turbine is converted into heat by the compressor at the hot end. The second, phenomenological approach relies on observation and experimental data. It is specifically tailored to the geometrical shape of the vortex tube and the details of its flow, and is designed to match the particular observables of the complex vortex tube flow, namely turbulence, acoustic phenomena, pressure fields, air velocities and many others. The earlier published models of the vortex tube are phenomenological. More on these models can be found in recent review articles on vortex tubes. [7] [8] The phenomenological models were developed at an earlier time, when the turbine equation of Euler was not thoroughly analyzed; in the engineering literature this equation is studied mostly to show the work output of a turbine, while temperature analysis is not performed, since turbine cooling has more limited application, unlike power generation, which is the main application of turbines. Phenomenological studies of the vortex tube in the past have been useful in presenting empirical data. However, due to the complexity of the vortex flow, this empirical approach was able to show only aspects of the effect and was unable to explain its operating principle. Dedicated as they were to empirical details, these studies long made the vortex tube effect appear enigmatic and its explanation a matter of debate. The vortex tube was invented in 1931 by French physicist Georges J. Ranque. [9] It was rediscovered by Paul Dirac in 1934 while he was searching for a device to perform isotope separation, leading to development of the Helikon vortex separation process. [10] German physicist Rudolf Hilsch [de] improved the design and published a widely read paper in 1947 on the device, which he called a Wirbelrohr (literally, whirl pipe). [11] In 1954, Westley [12] published a comprehensive survey entitled "A bibliography and survey of the vortex tube", which included over 100 references.
In 1951 Curley and McGree, [13] in 1956 Kalvinskas, [14] in 1964 Dobratz, [15] in 1972 Nash, [16] and in 1979 Hellyar [17] made important contributions to the RHVT literature with their extensive reviews on the vortex tube and its applications. From 1952 to 1963, C. Darby Fulton, Jr. obtained four U.S. patents relating to the development of the vortex tube. [18] In 1961, Fulton began manufacturing the vortex tube under the company name Fulton Cryogenics. [19] Fulton sold the company to Vortec, Inc. [19] The vortex tube was used by Linderstrom-Lang in 1967 to separate gas mixtures such as oxygen and nitrogen, carbon dioxide and helium, and carbon dioxide and air. [20] [21] Vortex tubes also seem to work with liquids to some extent, as demonstrated by Hsueh and Swenson in a laboratory experiment in which free-body rotation occurred in the core, with a thick boundary layer at the wall; the stream leaving the exhaust was cooler, suggesting possible use as a refrigerator. [22] In 1988 R. T. Balmer applied liquid water as the working medium. It was found that when the inlet pressure is high, for instance 20–50 bar, the heat energy separation process exists in incompressible (liquid) vortex flow as well. Note that this separation is only due to heating; cooling is no longer observed, since cooling requires compressibility of the working fluid. Vortex tubes have lower efficiency than traditional air conditioning equipment. [23] They are commonly used for inexpensive spot cooling when compressed air is available. Commercial vortex tubes are designed for industrial applications to produce a temperature drop of up to 71 °C (160 °F). With no moving parts, no electricity, and no refrigerant, a vortex tube can produce refrigeration up to 1,800 W (6,000 BTU/h) using 100 standard cubic feet per minute (2.832 m³/min) of filtered compressed air at 100 psi (6.9 bar).
They are the main technology used in cold air guns, enclosure coolers, and cooling vests. [ 24 ] A control valve in the hot air exhaust adjusts temperatures, flows and refrigeration over a wide range. [ 25 ] [ 26 ] Vortex tubes are used for cooling of cutting tools ( lathes and mills , both manually-operated and CNC machines) during machining. The vortex tube is well-matched to this application: machine shops generally already use compressed air, and a fast jet of cold air provides both cooling and removal of the chips produced by the tool. This eliminates or drastically reduces the need for liquid coolant, which is messy, expensive, and environmentally hazardous.
https://en.wikipedia.org/wiki/Vortex_tube
The vorticity equation of fluid dynamics describes the evolution of the vorticity ω of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is: Dω/Dt = ∂ω/∂t + (u · ∇)ω = (ω · ∇)u − ω(∇ · u) + (1/ρ²) ∇ρ × ∇p + ∇ × ((∇ · τ)/ρ) + ∇ × (B/ρ), where D/Dt is the material derivative operator, u is the flow velocity, ρ is the local fluid density, p is the local pressure, τ is the viscous stress tensor and B represents the sum of the external body forces. The first source term on the right-hand side represents vortex stretching. The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation Dω/Dt = (ω · ∇)u + ν∇²ω, where ν is the kinematic viscosity and ∇² is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies to Dω/Dt = ν∇²ω. Thus, for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to Dω/Dt = (ω · ∇)u − ω(∇ · u). Alternately, in the case of an incompressible, inviscid fluid with conservative body forces, Dω/Dt = (ω · ∇)u. For a brief review of additional cases and simplifications, see also.
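The two-dimensional simplification can be checked on a concrete field. The sketch below uses the 2-D Taylor–Green cellular flow (an illustrative choice, not from the article): in two dimensions the stretching term (ω · ∇)u vanishes identically because u is independent of z, and for this steady field the advection term (u · ∇)ω also vanishes pointwise, so Dω/Dt = 0 and the vorticity is simply carried by the flow.

```python
import math, random

# 2-D Taylor-Green cellular flow (z-independent, incompressible):
#   u = sin(x)cos(y),  v = -cos(x)sin(y),  w = 0
# Its only vorticity component is omega_z = dv/dx - du/dy = 2 sin(x) sin(y).
def u(x, y): return math.sin(x) * math.cos(y)
def v(x, y): return -math.cos(x) * math.sin(y)
def omega(x, y): return 2.0 * math.sin(x) * math.sin(y)

def advection(x, y, h=1e-6):
    """(u . grad)omega via central differences; should vanish pointwise."""
    dwx = (omega(x + h, y) - omega(x - h, y)) / (2 * h)
    dwy = (omega(x, y + h) - omega(x, y - h)) / (2 * h)
    return u(x, y) * dwx + v(x, y) * dwy

random.seed(0)
residuals = [abs(advection(random.uniform(0.0, 6.28), random.uniform(0.0, 6.28)))
             for _ in range(100)]
```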
[2] For the vorticity equation in turbulence theory, in the context of the flows in oceans and atmosphere, refer to [3]. The vorticity equation can be derived from the Navier–Stokes equation for the conservation of angular momentum; in the absence of any concentrated torques and line forces, one starts from the Navier–Stokes momentum equation. Now, vorticity is defined as the curl of the flow velocity vector; taking the curl of the momentum equation yields the desired equation. The following identities are useful in the derivation of the equation: ∇ × (∇φ) = 0, where φ is any scalar field, and (u · ∇)u = ∇(|u|²/2) − u × (∇ × u). The vorticity equation can also be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol e_ijk. In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is dη/dt = −η ∇_h · v_h − (∂w/∂x ∂v/∂z − ∂w/∂y ∂u/∂z) + (1/ρ²)(∂ρ/∂x ∂p/∂y − ∂ρ/∂y ∂p/∂x). Here, η is the polar (z) component of the vorticity, ρ is the atmospheric density, u, v, and w are the components of wind velocity, and ∇_h is the 2-dimensional (i.e. horizontal-component-only) del.
https://en.wikipedia.org/wiki/Vorticity_equation
Ocean Voyager is a Turkey-based manufacturer of maritime electronics, best known for producing Voyage Data Recorders (VDR) and Simplified Voyage Data Recorders (S-VDR) that comply with international maritime standards. The company develops integrated solutions that combine advanced hardware with user-friendly software interfaces, focusing on safety, compliance, and operational efficiency in the maritime industry. Ocean Voyager was established to meet the growing demand for secure and standardized data recording in the shipping industry. Since its foundation, the company has focused on the design and manufacturing of VDR systems in compliance with SOLAS (International Convention for the Safety of Life at Sea) and IMO (International Maritime Organization) regulations. Its solutions serve a variety of vessel types, including commercial cargo ships and passenger vessels, across global markets. Ocean Voyager’s product line is designed to accommodate different operational needs and vessel types: All models support up to 48 hours of protected capsule data recording and 30 days of internal system storage in accordance with IMO MSC.333(90) . The systems are capable of recording radar and ECDIS screens, audio from bridge and VHF sources, and a wide range of serial, digital, and analog signals. Key specifications of the DR-100 model include: Ocean Voyager VDR systems meet a wide range of international maritime standards and regulatory requirements, including: These certifications ensure that Ocean Voyager systems are suitable for use on various ship classes, including cargo vessels, high-speed craft, and passenger ships. Ocean Voyager systems are used in multiple aspects of maritime operations, such as: The included “Live Player” software allows for real-time monitoring and historical data playback , enabling users to view synchronized radar images, audio recordings, and sensor data. This supports training, accident analysis, and operational optimization efforts. 
Ocean Voyager products are used across multiple regions, including Europe, the Middle East, and Asia, and are increasingly adopted by commercial fleets. The systems are suitable for both newbuild installations and retrofit projects, offering flexible solutions for various maritime sectors. The company positions them as combining affordability, compliance with technical standards, and easy integration.
https://en.wikipedia.org/wiki/Voyage_data_recorder
The Vroman effect, named after Leo Vroman, describes the process of competitive protein adsorption to a surface by blood serum proteins. The highest-mobility proteins generally arrive first and are later replaced by less mobile proteins that have a higher affinity for the surface. The order of protein adsorption also depends on the molecular weight of the adsorbing species. [1] Typically, low molecular weight proteins are displaced by high molecular weight proteins, while the opposite, high molecular weight proteins being displaced by low molecular weight ones, does not occur. A typical example occurs when fibrinogen displaces earlier-adsorbed proteins on a biopolymer surface and is later replaced by high molecular weight kininogen. [2] The process is delayed in narrow spaces, and on hydrophobic surfaces fibrinogen is usually not displaced. Under stagnant conditions, initial protein deposition takes place in the sequence albumin, globulin, fibrinogen, fibronectin, factor XII, and HMWK. [3] While the exact mechanism of action is still unknown, many protein physical properties play a part in the Vroman effect. These properties include the protein's size, charge, mobility, stability, and the structure and composition of the domains that make up the protein's tertiary structure. Protein size determines the molecular weight. Protein charge determines whether favorable interactions will exist between the protein and a biomaterial. Protein mobility plays a role in adsorption kinetics. The simplest molecular explanation for the exchange of proteins on a surface is the adsorption/desorption model. Here, proteins interact with the surface of a biomaterial and "stick" to the material through interactions between the protein and the biomaterial surface.
Once a protein has adsorbed onto the surface of a biomaterial, the protein may change conformation (structure) and even become nonfunctional. The spaces between the proteins on the biomaterial then become available for new proteins to adsorb. Desorption occurs when the protein leaves the biomaterial surface. This simple model is incomplete, since Vroman-like behavior has been observed on hydrophobic surfaces as well as hydrophilic ones. [4] [5] Furthermore, adsorption and desorption do not completely explain competitive protein exchange on hydrophilic surfaces. [6] A "transient complex" model was first proposed by Huetz et al. to explain this competitive exchange. [6] This transient-complex exchange occurs in three distinct steps. Initially, a new protein embeds itself into an already adsorbed homogeneous protein monolayer. The aggregation of this new heterogeneous protein mixture causes the "turning" of the double-protein complex, which exposes the initially adsorbed protein to the solution. In the third step, the protein that was initially adsorbed can diffuse out into the solution and the new protein takes over. This three-part "transient complex" mechanism is further explained and verified through AFM imaging by Hirsh et al. [7] Jung et al. also describe a molecular mechanism for fibrinogen displacement involving pH cycling. [8] Here the αC domains of fibrinogen change charge after pH cycling, which results in conformational changes to the protein that lead to stronger interactions between the protein and the biomaterial. [8] The simplest mathematical model to explain the Vroman effect is the Langmuir model, using the Langmuir isotherm. [9] [10] More complex models include the Freundlich isotherm and other modifications of the Langmuir model. The Langmuir model describes the kinetics of reversible adsorption and desorption, assuming the adsorbate behaves as an ideal gas at isothermal conditions.
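The Langmuir picture can be sketched in a few lines. The rate constants below are arbitrary illustrative values, and the single-species kinetics shown is a simplification of the competitive multi-protein case; its steady state reproduces the Langmuir isotherm θ = KC/(1 + KC) with K = k_a/k_d.

```python
# Minimal sketch (not from the cited papers): single-species Langmuir
# adsorption/desorption kinetics integrated to steady state.
def langmuir_coverage(C, k_a, k_d, t_end=20.0, dt=1e-3):
    """Integrate d(theta)/dt = k_a*C*(1 - theta) - k_d*theta from theta = 0.
    C is the bulk concentration; theta is the fractional surface coverage."""
    theta = 0.0
    for _ in range(int(t_end / dt)):
        theta += (k_a * C * (1.0 - theta) - k_d * theta) * dt
    return theta

C, k_a, k_d = 2.0, 0.5, 1.0
theta = langmuir_coverage(C, k_a, k_d)
isotherm = (k_a / k_d) * C / (1.0 + (k_a / k_d) * C)  # Langmuir isotherm
```

A Vroman-style displacement sequence would require coupling several such equations, one per protein, competing for the same free sites.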
See also: Protein adsorption, Langmuir adsorption model
https://en.wikipedia.org/wiki/Vroman_effect
Vtiger is an open-source and cloud-based customer relationship management (CRM) software developed by the Indian software company Vtiger, founded in 2004. [1] [2] [3] [4] The software provides businesses with tools for sales automation, customer support, marketing automation, project management, inventory/products and billing. [5] [6] [7] Vtiger was launched by AdventNet, the parent company of Zoho. [8] [9] Vtiger originated as a fork of SugarCRM. [10] [11] In 2010, the company introduced a cloud-based version of Vtiger CRM. [12] [13] Vtiger released version 6 of its open-source edition in 2014. [14] According to Vtiger, version 7 of its open-source CRM was released in 2017. Version 9 of its cloud CRM offering was released in 2020. [15] [16] As of April 2025, the open-source version of Vtiger CRM hosted on SourceForge has been downloaded over 5.4 million times, according to its project page. [17] It also appeared as the most downloaded open-source CRM project in the CRM category on the platform. [18] According to Vtiger, the company was recognized as a Niche Player in the Gartner Magic Quadrant for Sales Force Automation for six consecutive years, from 2019 to 2024. [19] This is supported by independent coverage and official Gartner reports confirming the company's inclusion during that period. [20] [21] [22] [23] [24] [25]
https://en.wikipedia.org/wiki/Vtiger
The Vuilleumier cycle was patented by a Swiss-American engineer named Rudolph Vuilleumier in 1918. The purpose of Vuilleumier's machine was to create a heat pump that would use heat at high temperature as energy input. The Vuilleumier cycle... utilize[s] working gas expansion and compression at three variable volume spaces in order to pump heat from a low to a moderate temperature level. The interesting characteristic of the Vuilleumier machine is that the induced volume variations are realized without the use of work, but thermally. This is the reason why it has a potential to operate at modern applications where the pollution of the environment is not a choice. It is a perfect candidate for such applications, as it consists only of metallic parts and inert gas . Using these units for heating and cooling buildings, large energy savings can be accomplished as they can be operated at small scale in common buildings or at large scale providing heat power to entire building blocks without using fossil fuels . The use of Vuilleumier machines for industrial applications or inside vehicles is also a feasible option. Another field where these machines have already been involved is cryogenics , as they are also able to provide refrigeration at very low temperatures like the very similar and well-known Stirling refrigerators . [ 1 ] The Vuilleumier cycle is a thermodynamic cycle with applications in low-temperature cooling . In some respects it resembles a Stirling cycle or engine, although it has two " displacers " with a mechanical linkage connecting them as compared to one in the Stirling cycle. The hot displacer is larger than the cold displacer. The coupling maintains the appropriate phase difference . The displacers do no work—they are not pistons. Thus no work is required in an ideal case to operate the cycle. In reality friction and other losses mean that some work is required. 
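The claim that an ideal Vuilleumier machine pumps heat without shaft work can be quantified with a standard reversible-limit idealization (an assumption, not from the source): treat the machine as a Carnot engine operating between the hot and intermediate temperatures that internally drives a Carnot refrigerator between the cold and intermediate temperatures.

```python
# Illustrative upper bound (idealized, not from the source): a thermally
# driven heat pump modeled as a Carnot engine internally driving a Carnot
# refrigerator, with all heat rejected at the intermediate temperature.
def ideal_vuilleumier_cop(t_hot, t_mid, t_cold):
    """Cooling COP = Q_cold / Q_hot in the reversible limit (temps in K)."""
    engine_efficiency = 1.0 - t_mid / t_hot      # Carnot engine, t_hot -> t_mid
    fridge_cop = t_cold / (t_mid - t_cold)       # Carnot refrigerator
    return engine_efficiency * fridge_cop

# Assumed example temperatures: 900 K heater, 300 K ambient, 240 K cold space.
cop = ideal_vuilleumier_cop(900.0, 300.0, 240.0)
```

Real machines fall well short of this bound because of friction, dead volume and imperfect regeneration, but the structure of the formula shows why a hotter heat source directly improves the achievable cooling per unit of input heat.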
Devices operating on this cycle have been able to produce temperatures as low as 15 K using liquid nitrogen to pre-cool. Without precooling, 77 K was reached with a heat flow of 1 W. The cycle was first patented by Vuilleumier in 1918 with patent US1275507, [2] and again in Leiden by K. W. Taconis in 1951. In March 2014, the Vuilleumier cycle was tested as an upgrade to conventional HVAC (heating, ventilation, and air-conditioning) systems, using the cycle's thermally driven movement of heat energy; the results showed increased output efficiencies coupled with a reduced carbon footprint. [3] This work was completed by ThermoLift, a company based at the Advanced Energy Research and Technology Center at Stony Brook University, with collaboration from the US Department of Energy and the New York State Energy Research and Development Authority (NYSERDA). [4] This work culminated in the demonstration of the ThermoLift system at Oak Ridge National Laboratory in August 2018. The demonstration showed that the ThermoLift technology (TCHP) is able to achieve coefficients of performance (COP) for the cycle that well exceeded the DOE's target COPs for cold-climate heat pumps (although not exceeding geothermal heat pump efficiencies). Furthermore, due to the nature of the TCHP, there is no significant capacity decrease as the inlet temperature to the cold heat exchanger (HX) decreases. [5]
https://en.wikipedia.org/wiki/Vuilleumier_cycle
Vukić Mićović (Serbian: Вукић Мићовић; Bare Kraljske, near Andrijevica, Montenegro, 1 January 1896 – Belgrade, Serbia, Yugoslavia, 19 January 1981) was a Serbian chemist, professor and dean of the Faculty of Natural Sciences and Mathematics in Belgrade, rector of the University of Belgrade and academician of SANU. He was born in Kraljske Bare, near Andrijevica, on 1 January 1896, to father Milonja and mother Ružica, née Novović. He finished primary school in his native village (1903–1907) and three grades of the lower grammar school in Podgorica (1907–1910), where he shared a school bench with Risto Stijović. He continued his education in Belgrade, where he completed the fourth through seventh grades (1910–1914) at the Second Men's Gymnasium. The First World War prevented him from finishing the eighth grade of high school, because he joined the military in 1914 as a student sergeant in the Royal Battalion in Montenegro. In June 1916, he was taken prisoner in Hungary, where he was held in a camp until 23 December 1918. In 1919, he finished the eighth grade of the Second Men's Gymnasium in Belgrade, passed the matriculation exam and enrolled at the Faculty of Philosophy, in the departments of chemistry and physics. [1] He was a professor in the Faculty of Natural Sciences and Mathematics in Belgrade. In 1921, while still a student, he became an assistant in chemistry, and after graduating in 1922 he became a teaching assistant. He worked on his doctoral dissertation for two years at the Institute of Chemistry of the University of Nancy (1926–1928) as a scholarship holder of the French government, under Professor Vavon. He received his doctorate in July 1928 and stayed in Nancy for another year to complete his research and work. After that, he spent a year in London doing scientific work in the chemistry laboratory of University College London with Nobel laureate Robert Robinson, as a scholarship holder of the Serbian Support Fund.
He returned to Belgrade in October 1930. He was elected assistant professor in 1931 and associate professor in 1938. Immediately after the war, only professors Milivoje Lozanić and Vukić Mićović and assistant Sergije Lebedev were at the Department of Chemistry. He became a full professor in 1950. He was the director of the Chemical Institute in the period 1949–1960. He retired in 1966, and from 1945 until his retirement he was the head of the Department of Chemistry at the Faculty of Natural Sciences and Mathematics. He was the dean of the Faculty of Natural Sciences and Mathematics in Belgrade from 1949 to 1952. He was the rector of the University of Belgrade for two school years, 1952–1953 and 1953–1954. Mićović was popular with his many students from all over Yugoslavia. [2] He became a corresponding member of the Serbian Academy of Sciences and Arts on 30 January 1958, and a regular member on 20 December 1961. He was the secretary of the Department of Natural and Mathematical Sciences at SANU in 1963–1965, and the general secretary of SANU from 1965 to 1971. [3] Mićović published a large number of scientific papers in various fields of organic chemistry: the construction of alicyclic nuclei and synthetic glycerols with the structure of esters of dicarboxylic acids, determination of the constitution of quinidine carboxylic acids, systematic studies on reductions by means of lithium aluminum hydride, [4] reactions of aliphatic alcohols with lead tetraacetate, and studies on the chemical composition of the lichens of Serbia. [5] Mićović contributed to chemical nomenclature and terminology in the Serbian language. He wrote the university textbook "Stereochemistry" (Scientific Book, Belgrade, 1948, 565 pages). He translated from German the books by Arnold Frederik Holleman "Textbook of Organic Chemistry" and "Inorganic Chemistry". With a group of authors, he prepared a chemistry textbook for high schools, published in 1968.
In addition to Serbian, he spoke English, French, German, Russian and Italian. He was a member of the French Chemical Society (French: Société Chimique de France) from 1928 to 1941. He was the vice president of the Serbian Chemical Society and a member of the Croatian Chemical Society. He won several awards and recognitions, including the Seventh of July Award of Serbia in 1965, the Order of Labor with a Red Flag (1964) and the Order of Merit for the People with a Golden Star (1979). He died on 19 January 1981 in Belgrade and was buried in the Belgrade New Cemetery. [6] His bust, together with the bust of George K. Stefanović, stands in front of the entrance to the Great Chemical Amphitheater (WHA) of the Faculty of Chemistry. He was married to Magdalena "Lena" Sokić (1910–1993), the daughter of Milovan Sokić, a member of parliament and café owner from Ivanjica. They had three children: Ružica, Ivan and Milutin.
https://en.wikipedia.org/wiki/Vukić_Mićović
Vulcan ( / ˈ v ʌ l k ən / ) [ 2 ] was a proposed planet that some pre-20th century astronomers thought existed in an orbit between Mercury and the Sun . Speculation about, and even purported observations of, intermercurial bodies or planets date back to the beginning of the 17th century. The case for their probable existence was bolstered by the support of the French mathematician Urbain Le Verrier , who had predicted the existence of Neptune using disturbances in the orbit of Uranus . By 1859, he had confirmed unexplained peculiarities in Mercury's orbit and predicted that they had to be the result of the gravitational influence of another unknown nearby planet or series of asteroids . A French amateur astronomer's report that he had observed an object passing in front of the Sun that same year led Le Verrier to announce that the long sought after planet, which he gave the name Vulcan, had been discovered at last. Many searches were conducted for Vulcan over the following decades but, despite several claimed observations, its existence could not be confirmed. The need for the planet as an explanation for Mercury's orbital peculiarities was later rendered unnecessary when Einstein 's 1915 theory of general relativity showed that Mercury's departure from an orbit predicted by Newtonian physics was explained by effects arising from the curvature of spacetime caused by the Sun's mass. [ 3 ] [ 4 ] Celestial bodies interior to the orbit of Mercury had been hypothesized, searched for, and were even claimed to have been observed, for centuries. 
[ citation needed ] Claims of seeing objects passing in front of the Sun included those made by the German astronomer Christoph Scheiner in 1611 (which turned out to be the discovery of sunspots ), [ 5 ] British lawyer, writer and amateur astronomer Capel Lofft 's observations of 'an opaque body traversing the sun's disc' on 6 January 1818, [ 6 ] and Bavarian physician and astronomer Franz von Paula Gruithuisen 's 26 June 1819 report of seeing "two small spots...on the Sun, round, black and unequal in size". [ 7 ] German astronomer J. W. Pastorff [ de ] reported many observations also claiming to have seen two spots, with the first observation on 23 October 1822 and subsequent observations in 1823, 1834, 1836, and 1837; in 1834 the larger spot was recorded as 3 arcseconds across, and the smaller 1.25 arcseconds. [ 7 ] Proposals that there could be planets orbiting inside Mercury's orbit were put forward by British scientist Thomas Dick in 1838 [ 8 ] : 264 and by French physicist, mathematician, and astronomer Jacques Babinet in 1846 who suggested there may be "incandescent clouds of a planetary kind, circling the Sun" and proposed the name "Vulcan" (after the god Vulcan from Roman mythology ) for a planet close to the Sun. [ 8 ] : 156 As a planet near the Sun would be lost in its glare, several observers mounted systematic searches to try to catch it during " transit ", i.e. when it passes in front of the Sun's disc. German amateur astronomer Heinrich Schwabe searched unsuccessfully on every clear day from 1826 to 1843 and Yale scientist Edward Claudius Herrick conducted observations twice daily starting in 1847, hoping to catch a planet in transit. [ 8 ] : 264 French physician and amateur astronomer Edmond Modeste Lescarbault began searching the Sun's disk in 1853, and more systematically after 1858, with a 3.75 inch (95 mm) refractor in an observatory he set up outside his surgery. 
[ 8 ] : 146 In 1840, François Arago , the director of the Paris Observatory , suggested to mathematician Urbain Le Verrier that he work on the topic of Mercury 's orbit around the Sun . The goal of the study was to construct a model based on Sir Isaac Newton 's laws of motion and gravitation . By 1843, Le Verrier published his provisional theory regarding Mercury's motion, with a detailed presentation published in 1845, which would be tested during a transit of Mercury across the face of the Sun in 1848. [ 9 ] [ 10 ] Predictions from Le Verrier's theory failed to match the observations. [ 9 ] Despite that, Le Verrier continued his work and, in 1859, published a more thorough study of Mercury's motion. That was based on a series of meridian observations of the planet and 14 transits. The study's rigor meant that any differences between the motion predicted and what was observed would point to the influence of an unknown factor. Indeed, some discrepancies remained. [ 9 ] During Mercury's orbit, its perihelion advances by a small amount, something called perihelion precession . The observed value exceeds the classical mechanics prediction by the small amount of 43 arcseconds per century. [ 11 ] Le Verrier postulated that the excess precession could be explained by the presence of some unidentified object or objects inside the orbit of Mercury. He calculated that it was either another Mercury-sized planet or, since it was unlikely that astronomers were failing to see such a large object, an unknown asteroid belt near the Sun. [ 12 ] The fact that Le Verrier had predicted the existence of the planet Neptune in 1846, [ 13 ] using the same techniques, lent veracity to his claim. [ non-primary source needed ] [ citation needed ] On 22 December 1859, Le Verrier received a letter from Lescarbault, saying that he had seen a transit of the hypothetical planet on March 26 of that year. 
Le Verrier took the train to the village of Orgères-en-Beauce , some 70 kilometres (43 mi) south-west of Paris , to Lescarbault's home-made observatory. Le Verrier arrived unannounced and proceeded to interrogate the man. [ 14 ] Lescarbault described in detail how, on 26 March 1859, he observed a small black dot on the face of the Sun . [ 15 ] After some time had passed, he realized that it was moving. He thought it looked similar to the transit of Mercury which he had observed in 1845. He estimated the distance it had already traveled, made some measurements of its position and direction of motion and, using an old clock and a pendulum with which he took his patients' pulses, estimated the total duration of the transit (coming up with 1 hour, 17 minutes, and 9 seconds). [ 14 ] Le Verrier was not happy about Lescarbault's crude equipment but was satisfied the physician had seen the transit of a previously unknown planet. On 2 January 1860, he announced the discovery of the new planet with the proposed name from mythology, "Vulcan", [ 16 ] at the meeting of the Académie des Sciences in Paris. Lescarbault, for his part, was awarded the Légion d'honneur and invited to appear before numerous learned societies. [ 17 ] However, not everyone accepted the veracity of Lescarbault's "discovery". An eminent French astronomer, Emmanuel Liais , who was working for the Brazilian government in Rio de Janeiro in 1859, claimed to have been studying the surface of the Sun with a telescope twice as powerful as Lescarbault's, at the very moment that Lescarbault said he observed his mysterious transit. Liais, therefore, was "in a condition to deny, in the most positive manner, the passage of a planet over the sun at the time indicated". [ 18 ] Based on Lescarbault's "transit", Le Verrier computed Vulcan's orbit: it supposedly revolved about the Sun in a nearly circular orbit at a distance of 21 million kilometres (0.14 AU; 13,000,000 mi). 
The period of revolution was 19 days and 17 hours, and the orbit was inclined to the ecliptic by 12 degrees and 10 minutes (an incredible degree of precision). As seen from the Earth, Vulcan's greatest elongation from the Sun was 8 degrees. [ 14 ] Numerous reports reached Le Verrier from other amateurs who claimed to have seen unexplained transits. Some of these reports referred to observations made many years earlier, and many were not dated, let alone accurately timed. Nevertheless, Le Verrier continued to tinker with Vulcan's orbital parameters as each newly reported sighting reached him. He frequently announced dates of future Vulcan transits. When these failed to materialize, he tinkered with the parameters some more. [ 19 ] Shortly after 08:00 on 29 January 1860, F.A.R. Russell and three other people in London saw an alleged transit of an intra-Mercurial planet. [ 20 ] Many years later, an American observer, Richard Covington, claimed to have seen a well-defined black spot progress across the Sun's disk around 1860 when he was stationed in Washington Territory . [ 21 ] No observations of Vulcan were made in 1861. Then, on the morning of 20 March 1862, between 08:00 and 09:00 Greenwich Time , another amateur astronomer, a Mr. Lummis of Manchester, England, saw a transit. His colleague, whom he alerted, also saw the event. [ 22 ] Based on these two men's reports, two French astronomers, Benjamin Valz and Rodolphe Radau , independently calculated the object's supposed orbital period, with Valz deriving a figure of 17 days and 13 hours and Radau a figure of 19 days and 22 hours. [ 8 ] : 168 On 8 May 1865 another French astronomer, Aristide Coumbary , observed an unexpected transit from Istanbul , Turkey . [ 23 ] Between 1866 and 1878, no reliable observations of the hypothetical planet were made. 
Then, during the total solar eclipse of July 29, 1878 , two experienced astronomers, Professor James Craig Watson , the director of the Ann Arbor Observatory in Michigan , and Lewis Swift , from Rochester, New York , both claimed to have seen a Vulcan-type planet close to the Sun. Watson, observing from Separation Point, Wyoming , placed the planet about 2.5 degrees south-west of the Sun and estimated its magnitude at 4.5. Swift, observing the eclipse from a location near Denver, Colorado , saw what he took to be an intra-mercurial planet about 3 degrees south-west of the Sun. He estimated its brightness to be the same as that of Theta Cancri , a fifth-magnitude star which was also visible during totality, about six or seven minutes from the "planet". Theta Cancri and the planet were nearly in line with the Sun's centre. [ citation needed ] Watson and Swift had reputations as excellent observers. Watson had already discovered more than twenty asteroids , while Swift had several comets named after him. Both described the colour of their hypothetical intra-mercurial planet as "red". Watson reported that it had a definite disk—unlike stars, which appear in telescopes as mere points of light—and that its phase indicated that it was on the far side of the Sun approaching superior conjunction . [ 24 ] Both Watson and Swift had observed two objects they believed were not known stars, but after Swift corrected an error in his coordinates, none of the coordinates matched each other, nor known stars. The idea that four objects were observed during the eclipse generated controversy in scientific journals and mockery from Watson's rival C. H. F. Peters . Peters noted that the margin of error in the pencil and cardboard recording device Watson had used was large enough to plausibly include a bright known star. A skeptic of the Vulcan hypothesis, Peters dismissed all the observations as mistaking known stars as planets. 
[ 25 ] : 215–217 Astronomers continued searching for Vulcan during total solar eclipses in 1883, 1887, 1889, 1900, 1901, 1905, and 1908. [ 25 ] : 219 Finally, in 1908, William Wallace Campbell , Director, and Charles Dillon Perrine , Astronomer, of the Lick Observatory , after comprehensive photographic observations at three solar eclipse expeditions in 1901, 1905, and 1908, stated: "In our opinion, the work of the three Crocker Expeditions ... brings the observational side of the intermercurial planet problem—famous for half a century—definitely to a close." [ 26 ] In 1915, Einstein 's theory of relativity , an approach to understanding gravity entirely different from that of classical mechanics , removed the need for Le Verrier's hypothetical planet. [ 3 ] It showed that the peculiarities in Mercury's orbit were the result of the curvature of spacetime caused by the mass of the Sun. [ 27 ] This added a predicted 0.1 arc-second advance of Mercury's perihelion each orbital revolution, or 43 arc-seconds per century, exactly the observed amount (without any recourse to the existence of a hypothetical Vulcan). [ 28 ] The new theory modified the predicted orbits of all planets, but the magnitude of the differences from Newtonian theory diminishes rapidly as one gets farther from the Sun. Also, Mercury's fairly eccentric orbit makes it much easier to detect the perihelion shift than is the case for the nearly circular orbits of Venus and Earth . Einstein's theory was empirically verified in the Eddington experiment during the solar eclipse of May 29, 1919 , during which photographs showed the curvature of spacetime was bending starlight around the Sun. Most astronomers quickly accepted that a large planet inside the orbit of Mercury could not exist, given the corrected equation of gravity. [ 25 ] : 220 The International Astronomical Union has reserved the name "Vulcanoid" for asteroids that may exist inside the orbit of the planet Mercury.
So far, however, earth- and space-based telescopes and the NASA Parker Solar Probe have detected no such asteroids. [ 29 ]
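The 43 arcseconds-per-century figure for Mercury's anomalous perihelion advance can be recovered from the standard general-relativistic formula Δφ = 6πGM/(c²a(1−e²)) per orbit. The sketch below uses published values for the Sun's gravitational parameter and Mercury's orbital elements; these constants are supplied here for illustration and are not taken from this article:

```python
import math

# Schwarzschild perihelion advance per orbit: dphi = 6*pi*GM / (c^2 * a * (1 - e^2))
# Constants below are standard published values (assumed for illustration).
GM_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
C = 299_792_458.0           # speed of light, m/s
A_MERCURY = 5.7909e10       # Mercury's semi-major axis, m
E_MERCURY = 0.2056          # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period, days

def gr_perihelion_advance_arcsec_per_century():
    # advance per orbit, in radians
    dphi = 6 * math.pi * GM_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))
    orbits_per_century = 36525.0 / PERIOD_DAYS  # Julian century = 36525 days
    rad_to_arcsec = 180.0 / math.pi * 3600.0
    return dphi * orbits_per_century * rad_to_arcsec

print(round(gr_perihelion_advance_arcsec_per_century(), 1))  # ≈ 43.0
```

The result matches the unexplained residual Le Verrier had isolated, which is why general relativity ended the search for Vulcan.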
https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)
Vulcanization (British English: vulcanisation ) is a range of processes for hardening rubbers . [ 1 ] The term originally referred exclusively to the treatment of natural rubber with sulfur , which remains the most common practice. It has also grown to include the hardening of other (synthetic) rubbers via various means. Examples include silicone rubber via room temperature vulcanizing and chloroprene rubber (neoprene) using metal oxides. Vulcanization can be defined as the curing of elastomers , with the terms 'vulcanization' and 'curing' sometimes used interchangeably in this context. It works by forming cross-links between sections of the polymer chain which results in increased rigidity and durability, as well as other changes in the mechanical and electrical properties of the material. [ 2 ] Vulcanization, in common with the curing of other thermosetting polymers , is generally irreversible. The word was suggested by William Brockedon (a friend of Thomas Hancock who attained the British patent for the process) coming from the god Vulcan who was associated with heat and sulfur in volcanoes . [ 3 ] In ancient Mesoamerican cultures, rubber was used to make balls, sandal soles, elastic bands, and waterproof containers. [ 4 ] It was cured using sulfur-rich plant juices, an early form of vulcanization. [ 5 ] In the 1830s, Charles Goodyear worked to devise a process for strengthening rubber tires. Tires of the time would become soft and sticky with heat, accumulating road debris that punctured them. Goodyear tried heating rubber in order to mix other chemicals with it. This seemed to harden and improve the rubber, though this was due to the heating itself and not the chemicals used. Not realizing this, he repeatedly ran into setbacks when his announced hardening formulas did not work consistently. One day in 1839, when trying to mix rubber with sulfur , Goodyear accidentally dropped the mixture in a hot frying pan. 
To his astonishment, instead of melting further or vaporizing , the rubber remained firm and, as he increased the heat, the rubber became harder. Goodyear worked out a consistent system for this hardening, and by 1844 patented the process and was producing the rubber on an industrial scale. [ citation needed ] On 21 November 1843, a British inventor, Thomas Hancock took out a patent for the vulcanisation of rubber using sulfur, 8 weeks before Charles Goodyear in the US (30 January 1844). Accounts differ as to whether Hancock's patent was informed by inspecting samples of American rubber from Goodyear and whether inspecting such samples could have provided sufficient information to recreate Goodyear's process. There are many uses for vulcanized materials, some examples of which are rubber hoses, shoe soles, toys, erasers, hockey pucks, shock absorbers, conveyor belts, [ 6 ] vibration mounts/dampers, insulation materials, tires, and bowling balls. [ 7 ] Most rubber products are vulcanized as this greatly improves their lifespan, function, and strength. In contrast with thermoplastic processes (the melt-freeze process that characterize the behaviour of most modern polymers), vulcanization, in common with the curing of other thermosetting polymers , is generally irreversible. Five types of curing systems are in common use: The most common vulcanizing methods depend on sulfur. Sulfur, by itself, is a slow vulcanizing agent and does not vulcanize synthetic polyolefins . Accelerated vulcanization is carried out using various compounds that modify the kinetics of crosslinking; [ 8 ] this mixture is often referred to as a cure package. The main polymers subjected to sulfur vulcanization are polyisoprene ( natural rubber ) and styrene-butadiene rubber (SBR), which are used for most street-vehicle tires. The cure package is adjusted specifically for the substrate and the application. The reactive sites—cure sites—are allylic hydrogen atoms. 
These C-H bonds are adjacent to carbon-carbon double bonds (>C=C<). During vulcanization, some of these C-H bonds are replaced by chains of sulfur atoms that link with a cure site of another polymer chain. These bridges contain from one to several sulfur atoms. The number of sulfur atoms in the crosslink strongly influences the physical properties of the final rubber article. Short crosslinks give the rubber better heat resistance. Crosslinks with a higher number of sulfur atoms give the rubber good dynamic properties but less heat resistance. Dynamic properties are important for flexing movements of the rubber article, e.g., the movement of a side-wall of a running tire. Without good flexing properties, these movements rapidly form cracks, which ultimately make the rubber article fail. The vulcanization of neoprene or polychloroprene rubber (CR rubber) is carried out using metal oxides (specifically MgO and ZnO , sometimes Pb 3 O 4 ) rather than the sulfur compounds presently used with many natural and synthetic rubbers . In addition, because of various processing factors (principally scorch, the premature cross-linking of rubbers due to the influence of heat), the choice of accelerator is governed by different rules from those for other diene rubbers. Most conventionally used accelerators are problematic when CR rubbers are cured, and the most important accelerant has been found to be ethylene thiourea (ETU), which, although an excellent and proven accelerator for polychloroprene, has been classified as reprotoxic . From 2010 to 2013, the European rubber industry ran a research project titled SafeRubber to develop a safer alternative to the use of ETU. [ 9 ] Room-temperature vulcanizing (RTV) silicone is constructed of reactive oil-based polymers combined with strengthening mineral fillers. There are two types of room-temperature vulcanizing silicone:
https://en.wikipedia.org/wiki/Vulcanization
In conservation biology , susceptibility is the extent to which an organism or ecological community would suffer from a threatening process or factor if exposed, without regard to the likelihood of exposure. [ 1 ] It should not be confused with vulnerability , which takes into account both the effect of exposure and the likelihood of exposure. [ 2 ] For example, a plant species may be highly susceptible to a particular plant disease, meaning that exposed populations invariably become extinct or decline heavily. However, that species may not be vulnerable if it occurs only in areas where exposure to the disease is unlikely, or if it occurs over such a wide distribution that exposure of all populations is unlikely. Conversely, a plant species may show low susceptibility to a disease, yet may be considered vulnerable if the disease is present in every population.
https://en.wikipedia.org/wiki/Vulnerability_and_susceptibility_in_conservation_biology
Vulnerability assessment is a process of defining, identifying and classifying the security holes in information technology systems. An attacker can exploit a vulnerability to violate the security of a system. Common classes of vulnerability include authentication, authorization and input validation vulnerabilities. [ 1 ] Before a system is deployed, it must first go through a series of vulnerability assessments to ensure that the system as built is secure from known security risks. When a new vulnerability is discovered, the system administrator can again perform an assessment, discover which modules are vulnerable, and start the patch process. After the fixes are in place, another assessment can be run to verify that the vulnerabilities were actually resolved. This cycle of assess, patch, and re-assess has become the standard method for many organizations to manage their security issues. The primary purpose of the assessment is to find the vulnerabilities in the system, but the assessment report also conveys to stakeholders that the system is secured from these vulnerabilities. If an intruder gained access to a network consisting of vulnerable Web servers, it is safe to assume that he gained access to those systems as well. [ 2 ] Using the assessment report, the security administrator can determine how an intrusion occurred, identify compromised assets and take appropriate security measures to prevent critical damage to the system. Depending on the system, vulnerability assessments come in several types and levels. A host assessment looks for system-level vulnerabilities such as insecure file permissions, application-level bugs, and backdoor and Trojan horse installations. It requires specialized tools for the operating system and software packages being used, in addition to administrative access to each system that should be tested.
Host assessment is often very costly in terms of time, and thus is used only in the assessment of critical systems. Tools like COPS and Tiger are popular in host assessment. In a network assessment, one assesses the network for known vulnerabilities. It locates all systems on a network, determines what network services are in use, and then analyzes those services for potential vulnerabilities. This process does not require any configuration changes on the systems being assessed. Unlike host assessment, network assessment requires little computational cost and effort. Vulnerability assessment and penetration testing are two different testing methods. They are differentiated on the basis of certain specific parameters.
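The assess, patch, and re-assess cycle described above can be sketched as a small simulation. The `scan` function, the host names, and the finding IDs below are hypothetical stand-ins for a real scanner, used only to illustrate the flow:

```python
# Minimal sketch of the assess -> patch -> re-assess cycle.
# `scan` and the finding IDs are hypothetical illustrations, not a real scanner API.

def scan(hosts, fixed=frozenset()):
    """Pretend vulnerability scan: map each host to its open finding IDs."""
    known_issues = {
        "web-01": {"CVE-2021-0001", "weak-file-perms"},
        "db-01": {"default-credentials"},
    }
    return {h: known_issues.get(h, set()) - fixed for h in hosts}

hosts = ["web-01", "db-01"]

baseline = scan(hosts)                             # 1. assess
fixed = {"CVE-2021-0001", "default-credentials"}   # 2. patch (simulated remediation)
verify = scan(hosts, fixed=frozenset(fixed))       # 3. re-assess to confirm the fixes
remaining = {h: ids for h, ids in verify.items() if ids}
print(remaining)  # anything left feeds the next assess-patch cycle
```

The re-assessment step is what distinguishes this cycle from a one-off audit: any finding still present after patching is carried into the next iteration rather than assumed fixed.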
https://en.wikipedia.org/wiki/Vulnerability_assessment_(computing)
An ongoing concern in the area of nuclear safety and security is the possibility that terrorist organizations may attack facilities possessing radioactive material in order to cause widespread radioactive contamination or to construct nuclear weapons . Such facilities may include nuclear power plants , civilian research reactors, uranium enrichment plants, fuel fabrication plants, uranium mines, and military bases where nuclear weapons are stored. The attack threat is of several general types: commando-like ground-based attacks on equipment which if disabled could lead to a reactor core meltdown or widespread dispersal of radioactivity, external attacks such as an aircraft crash into a reactor complex, or cyber attacks. [ 1 ] The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. The Federation of American Scientists have said that if nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from such attacks. New reactor designs have features of passive nuclear safety , which may help. In the United States, the Nuclear Regulatory Commission carries out "Force on Force" exercises at all nuclear power plant sites at least once every three years. [ 1 ] Nuclear power plants become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, and invasions. [ 2 ] Various acts of civil disobedience since 1980 by the peace group Plowshares have demonstrated extraordinary breaches of security at nuclear weapons plants in the United States. 
The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action. Non-proliferation policy experts have questioned "the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material". [ 3 ] Nuclear weapons materials on the black market are a global concern, [ 4 ] [ 5 ] and there is concern about the possible detonation of a dirty bomb by a militant group in a major city. [ 6 ] [ 7 ] The number and sophistication of cyber attacks is on the rise. Stuxnet is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's uranium enrichment facilities. It caused major damage to the facility by operating the centrifuges in erratic and unintended ways. [ 8 ] The computers of South Korea's nuclear plant operator ( KHNP ) were hacked in December 2014. The cyber attacks involved thousands of phishing emails containing malicious code, and information was stolen. [ 9 ] Neither of these attacks directly involved nuclear reactors or their facilities. Nuclear reactors become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, invasions and campaigns: [ 2 ] Risks of nuclear energy systems aren't limited to deliberate bombing/shelling of or near nuclear energy plants – nuclear energy systems within war-zones in general have various additional vulnerabilities. Deliberate or unintentional bombing/shelling of or near radioactive waste-sites [ 15 ] is a further concern. These risks have become clearer during the 2022 Russian invasion of Ukraine . 
For example, when Russian forces occupied the inactive nuclear plant at Chernobyl , it still required "a crew of workers to maintain and monitor it to prevent any further nuclear incidents"; fatigue among workers, who under occupation may not be allowed to come and go freely, may make mistakes more likely. [ 16 ] [ 17 ] [ 18 ] In a spring 2021 report, the EU Commission's research centre (JRC) concluded that the terrorist risk to nuclear power plants is vanishingly small, and that even a successful attack would have relatively insignificant consequences. The JRC found that hydropower dams and oil and gas infrastructure pose a significantly greater terrorist risk, although such scenarios remain extremely unlikely. [ 19 ] American physicist and nuclear energy critic Amory Lovins , in his 1982 book Brittle Power , argued that the energy generation and distribution system of the United States is "brittle" (easily shattered by accident or malice) and that this poses a grave and growing threat to national security, life, and liberty. [ 20 ] Lovins claims that these vulnerabilities are increasingly being exploited. His book documents many significant assaults on energy facilities, other than during a war, in 40 countries and, within the United States, in some 24 states. [ 21 ] Following 9/11, he re-released this book. Lovins further claims that in 1966, 20 natural uranium fuel rods were stolen from the Bradwell nuclear power station in England, and in 1971, five more were stolen at the Wylfa Nuclear Power Station . In 1971, an intruder wounded a night watchman at the Vermont Yankee reactor in the US. The New York University reactor building was broken into in 1972, as was the Oconee Nuclear Station 's fuel storage building in 1973. In 1975, the Kerr McGee plutonium plant had thousands of dollars worth of platinum stolen and taken home by workers.
In 1975, at the Biblis Nuclear Power Plant in Germany, a Member of Parliament demonstrated the lack of security by carrying a bazooka into the plant under his coat. [ 22 ] Nuclear plants were designed to withstand earthquakes, hurricanes, and other extreme natural events. But deliberate attacks involving large airliners loaded with fuel, such as those that crashed into the World Trade Center and the Pentagon , were not considered when design requirements for today's fleet of reactors were determined. In 1972, three hijackers took control of a domestic passenger flight along the east coast of the U.S. and threatened to crash the plane into a U.S. nuclear weapons plant in Oak Ridge, Tennessee. The plane got as close as 8,000 feet above the site before the hijackers' demands were met. [ 23 ] [ 24 ] In February 1993, a man drove his car past a checkpoint at the Three Mile Island Nuclear plant, then broke through an entry gate. He eventually crashed the car through a secure door and entered the Unit 1 reactor turbine building. The intruder, who had a history of mental illness, hid in a building and was not apprehended for four hours. Stephanie Cooke asks: "What if he'd been a terrorist armed with a ticking bomb?" [ 25 ] Fissile material may be stolen from nuclear plants, and this may promote the spread of nuclear weapons. Many terrorist groups are eager to acquire the fissile material needed to make a crude nuclear device, or a dirty bomb . Nuclear weapons materials on the black market are a global concern, [ 4 ] [ 5 ] and there is concern about the possible detonation of a small, crude nuclear weapon by a militant group in a major city, with significant loss of life and property. [ 6 ] [ 7 ] It is feared that a terrorist group could detonate a radiological or "dirty bomb", composed of any radioactive source and a conventional explosive. The radioactive material is dispersed by the detonation of the explosive.
Detonation of such a weapon is not as powerful as a nuclear blast, but can produce considerable radioactive fallout . Alternatively, a terrorist group may position some of its members, or sympathisers, within the plant to sabotage it from inside. [ 26 ] The IAEA Incident and Trafficking Database (ITDB) notes 1,266 incidents reported by 99 countries over the last 12 years, including 18 incidents involving HEU or plutonium trafficking: [ 27 ] Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. According to a 2004 report by the U.S. Congressional Budget Office , "The human, environmental, and economic costs from a successful attack on a nuclear power plant that results in the release of substantial quantities of radioactive material to the environment could be great." [ 37 ] An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities. [ 1 ] If nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from attacks that could release massive quantities of radioactivity into the community. New reactor designs have features of passive safety , such as the flooding of the reactor core without active intervention by reactor operators. But these safety measures have generally been developed and studied with respect to accidents, not with respect to a deliberate attack on a reactor by a terrorist group.
However, the US Nuclear Regulatory Commission does now also require new reactor license applications to consider security during the design stage. [ 1 ] In the United States, the NRC carries out "Force on Force" (FOF) exercises at all nuclear power plant (NPP) sites at least once every three years. The FOF exercise, which is typically conducted over 3 weeks, "includes both tabletop drills and exercises that simulate combat between a mock adversary force and the licensee’s security force. At an NPP, the adversary force attempts to reach and simulate damage to key safety systems and components, defined as "target sets" that protect the reactor's core or the spent fuel pool, which could potentially cause a radioactive release to the environment. The licensee's security force, in turn, interposes itself to prevent the adversaries from reaching target sets and thus causing such a release". [ 1 ] In the U.S., plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. [ 38 ] In 2009, a paper published in the United States Military Academy 's journal alleged that Pakistan 's nuclear sites had been attacked by al-Qaeda and the Taliban at least three times. [ 29 ] However, the then Director General ISPR Athar Abbas said the claims were "factually incorrect", adding that the sites were "military facilities, not nuclear installations". [ 30 ] [ 31 ] In January 2010, it was revealed that the US military was training a specialised unit "to seal off and snatch back" Pakistani nuclear weapons in the event that militants would obtain a nuclear device or materials that could make one. Pakistan supposedly possesses about 160 nuclear warheads. US officials refused to speak on the record about the American safety plans. [ 39 ] Insider sabotage regularly occurs, because insiders can observe and work around security measures. 
In a study of insider crimes, the authors repeatedly said that successful insider crimes depended on the perpetrators' observation and knowledge of security vulnerabilities. Since the atomic age began, the U.S. Department of Energy 's nuclear laboratories have been known for widespread violations of security rules. During the Manhattan Project , physicist Richard Feynman was barred from entering certain nuclear facilities; he would crack safes and violate other rules as pranks to reveal deficiencies in security. [ 40 ] A deliberate fire caused between $5m and $10m worth of damage to New York's Indian Point Energy Center in 1971. The arsonist turned out to be a plant maintenance worker. Sabotage by workers has been reported at many other reactors in the United States: at Zion Nuclear Power Station (1974), Quad Cities Nuclear Generating Station , Peach Bottom Nuclear Generating Station , Fort St. Vrain Generating Station , Trojan Nuclear Power Plant (1974), Browns Ferry Nuclear Power Plant (1980), and Beaver Valley Nuclear Generating Station (1981). Many reactors overseas have also reported sabotage by workers. Suspected arson has occurred in the United States and overseas. [ 22 ] In 1998 a group of workers at one of Russia's largest nuclear weapons facilities attempted to steal 18.5 kilograms of HEU—enough for a bomb. [ 22 ] It can be argued that Pakistan's whole nuclear program was jump-started by the actions of insiders. Following India's first nuclear weapons test, URENCO scientist A.Q. Khan wrote a letter to the Pakistani Prime Minister, Zulfiqar Ali Bhutto , offering to help start a nuclear weapons program for his home country. Soon after their conversations, Khan started delivering instructions and blueprints to Pakistan, to which he had gained access through his work translating the sophisticated G-1 and G-2 centrifuge designs from German into Dutch.
Khan also acquired the essential expertise for running centrifuge operations from URENCO, which he would later relay back to scientists in Pakistan. By the time his coworkers at URENCO started to suspect something was going on, Khan had already fled back to safety in Pakistan. After just six years, Khan said his plants were “producing substantial quantities of uranium”. [ 41 ] Because he obtained for Pakistan the blueprints needed to start enriching uranium within its borders, Khan is widely regarded as "the father of Pakistan’s nuclear weapons program". [ 42 ] Various acts of civil disobedience since 1980 by the peace group Plowshares have shown how nuclear weapons facilities can be penetrated, and the group's actions represent extraordinary breaches of security at nuclear weapons plants in the United States. On July 28, 2012, three members of Plowshares cut through fences at the Y-12 National Security Complex in Oak Ridge, Tennessee, which manufactures US nuclear weapons and stockpiles highly enriched uranium. The group spray-painted protest messages, hung banners, and splashed blood . [ 3 ] The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action, which involved the protesters walking into a high-security zone of the plant, calling the security breach "unprecedented." The independent security contractor WSI has since had a weeklong "security stand-down," a halt to weapons production, and mandatory refresher training for all security staff. [ 3 ] Non-proliferation policy experts are concerned about the relative ease with which these unarmed, unsophisticated protesters could cut through a fence and walk into the center of the facility. This is further evidence that nuclear security—the securing of highly enriched uranium and plutonium—should be a top priority to prevent terrorist groups from acquiring nuclear bomb-making material.
These experts have questioned "the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material". [ 3 ] In 2010, there was a security breach at a Belgian Air Force base which possessed U.S. nuclear warheads. The incident involved six anti-nuclear activists entering Kleine Brogel Air Base . The activists stayed in the snow-covered base for about 20 minutes, before being arrested. A similar event occurred in 2009. [ 43 ] On December 5, 2011, two anti-nuclear campaigners breached the perimeter of the Cruas Nuclear Power Plant in France, escaping detection for more than 14 hours, while posting videos of their sit-in on the internet. [ 44 ] Stuxnet is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's nuclear facilities. [ 8 ] It switched off safety devices, causing centrifuges to spin out of control. Stuxnet initially spreads via Microsoft Windows , and targets Siemens industrial control systems . While it is not the first time that hackers have targeted industrial systems, [ 45 ] it is the first discovered malware that spies on and subverts industrial systems, [ 46 ] and the first to include a programmable logic controller (PLC) rootkit . [ 47 ] [ 48 ] Different variants of Stuxnet targeted five Iranian organizations, [ 49 ] with the probable target widely suspected to be uranium enrichment infrastructure in Iran ; [ 50 ] [ 51 ] Symantec noted in August 2010 that 60% of the infected computers worldwide were in Iran. [ 52 ] Siemens stated that the worm has not caused any damage to its customers, [ 53 ] but the Iran nuclear program, which uses embargoed Siemens equipment procured secretly, has been damaged by Stuxnet. [ 54 ] [ 55 ] Kaspersky Lab concluded that the sophisticated attack could only have been conducted "with nation-state support". 
[ 56 ] Idaho National Laboratory ran the Aurora Experiment in 2007 to demonstrate how a cyber attack could destroy physical components of the electric grid. [ 57 ] The experiment used a computer program to rapidly open and close a diesel generator 's circuit breakers out of phase with the rest of the grid, causing the generator to explode. This vulnerability is referred to as the Aurora Vulnerability . The number and sophistication of cyber attacks are on the rise. The computers of South Korea 's nuclear plant operator ( KHNP ) were hacked in December 2014. The cyber attacks involved thousands of phishing emails containing malicious code, and information was stolen. [ 9 ] Nothing critical to operations was hacked at the plant, so the group was unable to threaten the operation of the reactor. Releasing personnel files and business data does not compromise nuclear safety, even if it embarrasses the company. [ 58 ] In December 2017 it was reported that the safety systems of an unidentified power station, believed to be in Saudi Arabia, were compromised when the Triconex industrial safety technology made by Schneider Electric SE was targeted in what is believed to have been a state-sponsored attack. The computer security company Symantec claimed that the malware, known as Triton , exploited a vulnerability in computers running the Microsoft Windows operating system. [ 59 ] Population density is one critical lens through which risks have to be assessed, says Laurent Stricker, a nuclear engineer and chairman of the World Association of Nuclear Operators : [ 60 ] The KANUPP plant in Karachi, Pakistan , has the most people—8.2 million—living within 30 kilometres, although it has just one relatively small reactor with an output of 125 megawatts. Next in the league, however, are much larger plants— Taiwan 's 1,933-megawatt Kuosheng plant with 5.5 million people within a 30-kilometre radius and the 1,208-megawatt Chin Shan plant with 4.7 million; both zones include the capital city of Taipei .
[ 60 ] Some 172,000 people living within a 30-kilometre radius of the Fukushima Daiichi nuclear power plant have been forced or advised to evacuate the area. More generally, a 2011 analysis by Nature and Columbia University shows that some 21 nuclear plants have populations larger than 1 million within a 30-km radius, and six plants have populations larger than 3 million within that radius. [ 60 ] However, government plans for remote siting of nuclear plants in rural areas, and the transmission of electricity by high-voltage direct current lines to industrial regions, would enhance safety and security. On the other hand, nuclear plant security would be at elevated risk during a natural or man-made electromagnetic pulse event and the ensuing civil disorder in surrounding areas. In his book Normal Accidents , Charles Perrow says that multiple and unexpected failures are built into society's complex and tightly coupled nuclear reactor systems. Such accidents are unavoidable and cannot be designed around. [ 61 ] In the 2003 book Brittle Power , Amory Lovins talks about the need for a resilient, secure energy system: The foundation of a secure energy system is to need less energy in the first place, then to get it from sources that are inherently invulnerable because they're diverse, dispersed, renewable, and mainly local. They're secure not because they're American but because of their design. Any highly centralised energy system—pipelines, nuclear plants, refineries—invite devastating attack. But invulnerable alternatives don't, and can't, fail on a large scale. [ 62 ]
https://en.wikipedia.org/wiki/Vulnerability_of_nuclear_facilities_to_attack
A vulnerable species is a species which has been categorized by the International Union for Conservation of Nature as being threatened with extinction unless the circumstances that are threatening its survival and reproduction improve. Vulnerability is mainly caused by habitat loss or destruction of the species' home. Vulnerable habitats or species are monitored and can become increasingly threatened. Some species listed as "vulnerable" may be common in captivity , an example being the military macaw . In 2012 there were 5,196 animals and 6,789 plants classified as vulnerable, compared with 2,815 and 3,222, respectively, in 1998. [ 1 ] Practices such as cryoconservation of animal genetic resources have been implemented in efforts to conserve vulnerable breeds of livestock specifically. The International Union for Conservation of Nature uses several criteria to enter species in this category. A taxon is Vulnerable when it is not Critically Endangered or Endangered but is facing a high risk of extinction in the wild in the medium-term future, as defined by any of the following criteria (A to E): A) Population reduction in the form of either of the following: B) Extent of occurrence estimated to be less than 20,000 km 2 or area of occupancy estimated to be less than 2,000 km 2 , and estimates indicating any two of the following: C) Population estimated to number fewer than 10,000 mature individuals and either: D) Population very small or restricted in the form of either of the following: E) Quantitative analysis showing the probability of extinction in the wild is at least 10% within 100 years. Examples of vulnerable animal species include the hyacinth macaw , mountain zebra , gaur , black crowned crane and blue crane .
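The numeric thresholds quoted in criteria B, C and E above can be illustrated with a short sketch. This is a simplification for illustration only: the function name is hypothetical, and the real IUCN criteria attach subconditions (the sub-lists truncated above) that are not reproduced here.

```python
# Illustrative check of the numeric thresholds in criteria B, C and E
# for the Vulnerable category. Hypothetical helper; a real IUCN
# assessment also requires the subconditions omitted above.
def meets_vulnerable_thresholds(extent_km2=None, occupancy_km2=None,
                                mature_individuals=None,
                                p_extinct_100y=None):
    checks = []
    # Criterion B: extent of occurrence < 20,000 km2
    # or area of occupancy < 2,000 km2
    if extent_km2 is not None:
        checks.append(extent_km2 < 20_000)
    if occupancy_km2 is not None:
        checks.append(occupancy_km2 < 2_000)
    # Criterion C: fewer than 10,000 mature individuals
    if mature_individuals is not None:
        checks.append(mature_individuals < 10_000)
    # Criterion E: extinction probability of at least 10% within 100 years
    if p_extinct_100y is not None:
        checks.append(p_extinct_100y >= 0.10)
    return any(checks)

print(meets_vulnerable_thresholds(extent_km2=15_000))    # True
print(meets_vulnerable_thresholds(p_extinct_100y=0.05))  # False
```

Any one criterion suffices for listing, which is why the sketch returns `True` as soon as a single supplied threshold is met.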
https://en.wikipedia.org/wiki/Vulnerable_species
vx-underground , also known as VXUG , is an educational website about malware and cybersecurity. [ 1 ] [ 2 ] It claims to have the largest online repository of malware. [ 3 ] The site was launched in May 2019 and has grown to host over 35 million malware samples. [ 1 ] [ 4 ] On its Twitter account, VXUG reports on and verifies cybersecurity breaches. [ 5 ] Kim Crawley compared the site to VirusTotal and stated that vx-underground is more likely to attract suspicion from law enforcement. [ 6 ] In May 2024, the International Baccalaureate organization faced allegations over supposed breaches in its IT infrastructure after an incident of examination leaks . Upon inspecting leaked data, VXUG was the first to report, on the morning of May 6, that the breach seemed legitimate. [ 7 ]
https://en.wikipedia.org/wiki/Vx-underground
Vyacheslav Vasilyevich Lebedinsky ( Russian : Вячеслав Васильевич Лебединский ; 14 September 1888 – 12 December 1956) was a Russian and Soviet chemist who worked on platinum, rhodium and iridium, their extraction and use in catalysis. He also worked on complex compounds of rhodium and iridium. He was also a noted teacher and guided 20 doctoral students in inorganic chemistry. Lebedinsky was born in Saint Petersburg . He graduated from high school in 1907 and went to St. Petersburg University. Graduating in 1913 with a thesis on anomalous rotatory dispersion, he stayed on at the department of inorganic chemistry and studied under Lev Chugaev . He also examined complex metal chemistry and synthesized four forms of ammonium derivatives with trivalent rhodium. [ 1 ] He became a professor in 1920 and moved to Moscow in 1935 to work at the Moscow Institute of Non-Ferrous Metals and Gold. He developed a method for the extraction of these metals from copper-nickel sludge, for which he received the Stalin Prize in 1946. He studied platinum catalysis for disinfection of drinking water, and the treatment of waste water. He worked on rhodium extraction and purification and the synthesis of complex compounds of rhenium and ethylene diamine. [ 2 ]
https://en.wikipedia.org/wiki/Vyacheslav_Lebedinsky
vzRoom is a software system developed by Manipeer Limited for multi-party video conferencing , media sharing and VoIP phone integration. It was launched in July 2008. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Each conference room supports up to 63 concurrent users . Each user has their own room in which to host an individual videoconferencing meeting, and all users can share voice , camera and other multimedia with the other users in the videoconferencing room. The meeting is coordinated by the host of the room. [ 10 ] vzRoom clients can connect to a VoIP phone and merge the phone call with the conference room users.
https://en.wikipedia.org/wiki/VzRoom
In Indian calendrical systems, vāra (or vāsara ) denotes the week-day . It is one of the five elements that constitute the traditional almanacs called Pañcāṅga -s, the other four being Nakshatra , Tithi , Karaṇa and Nityayoga . [ 1 ] The concept of the week, a unit of time consisting of seven days, is not indigenous to Indian civilisation; it was probably borrowed from the Babylonians, and its use predates the use of the twelve zodiacal signs in Indian civilisation. The concept finds mention in the Atharva Veda . The seven week-days are named after the seven classical planets, as in the ancient Greek and Roman traditions. [ 1 ] [ 2 ] [ 3 ] The historical rationale behind the current naming of the week-days is astrological in origin. Surya-Siddhānta (Chapter XII, Bhūgolādhyāya , Verses 78–79) and Āryabhaṭīya have also indicated this rationale. [ 2 ] The rationale is reflected in one of the literal meanings of the Sanskrit word vāsara (another term for vāra ), which is "relating to or appearing in the morning". [ 4 ] The naming of the days of a week is certainly not of Indian origin, and neither is the concept of a seven-day week as a unit of time. The system of dividing a day into 24 hora -s appears in India only in the astrological literature. Works on astronomy like Surya-Siddhānta and Āryabhaṭīya do not mention the hora as a unit of time; in such works, the common practice is to divide a day into 60 ghaṭi -s and each ghaṭi into 60 vighaṭi -s. Moreover, no work of the Vedic and Vedāṅga periods mentions it. Further, the word hora is not even of Sanskrit origin. The Chaldeans had this unit in use for a long time, and they did have a week of seven days; vāra -s were known to the Chaldeans long before 3800 BCE. It is probably the case that the ancient Indian astronomers and astrologers borrowed the concept of vāra or week from the Chaldeans.
[ 2 ] The Atharva Veda contains references to vāra . From evidence obtained from the Atharva Jyotiṣa and the Yājñavalkya Smṛti , it has been determined that the vāra -s came into use well before the 12 zodiacal signs did. Thus, in the Indian subcontinent, the use of vāra -s predates the use of the rāśi -s. The days of the week may have been introduced in India at about 1000 BCE, and no later than 500 BCE. [ 1 ] [ 3 ] The names of the vāra -s in all of the 22 languages recognized by the Constitution of India [ 5 ] are given in the following table. For a longer list, see: Week-days in languages of the Indian subcontinent .
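The astrological hora rationale described above, in which the seven classical planets rule successive hours in order of decreasing assumed distance and the ruler of a day's first hour names the day, can be sketched in a few lines (an illustration of the well-known scheme, not code from any source cited here):

```python
# Chaldean order: the seven classical planets by decreasing
# apparent orbital period (i.e., decreasing assumed distance).
planets = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Each hour is ruled by the next planet in the cycle, and the ruler of
# a day's first hour names the day. With 24 hours per day, successive
# days start 24 % 7 = 3 places further along the cycle.
start = planets.index("Sun")  # begin the week with the Sun's day
week = [planets[(start + 24 * d) % 7] for d in range(7)]
print(week)
# ['Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus', 'Saturn']
```

Skipping three places per day through the Chaldean order yields exactly the familiar Sunday-to-Saturday sequence of planetary week-day names.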
https://en.wikipedia.org/wiki/Vāra_(astronomy)
Hexamethyltungsten is the chemical compound W ( CH 3 ) 6 also written WMe 6 . Classified as a transition metal alkyl complex , hexamethyltungsten is an air-sensitive, red, crystalline solid at room temperature; however, it is extremely volatile and sublimes at −30 °C. Owing to its six methyl groups it is extremely soluble in petroleum , aromatic hydrocarbons , ethers , carbon disulfide , and carbon tetrachloride . [ 1 ] [ 2 ] Hexamethyltungsten was first reported in 1973 by Wilkinson and Shortland, who described its preparation by the reaction of methyllithium with tungsten hexachloride in diethyl ether . [ 1 ] The synthesis was motivated in part by previous work which indicated that tetrahedral methyl transition metal compounds are thermally unstable, in the hopes that an octahedral methyl compound would prove to be more robust. In 1976, Wilkinson and Galyer disclosed an improved synthesis using trimethylaluminium in conjunction with trimethylamine , instead of methyllithium. [ 3 ] The stoichiometry of the improved synthesis is as follows: Alternatively, the alkylation can employ dimethylzinc : [ 4 ] W(CH 3 ) 6 adopts a distorted trigonal prismatic geometry with C 3 v symmetry for the WC 6 framework and C 3 symmetry including the hydrogen atoms. The structure (excluding the hydrogen atoms) can be thought of as consisting of a central atom, capped on either side by two eclipsing sets of three carbon atoms, with one triangular set slightly larger but also closer to the central atom than the other. The trigonal prismatic geometry is unusual in that the vast majority of six-coordinate organometallic compounds adopt octahedral molecular geometry . In the initial report, the IR spectroscopy results were interpreted in terms of an octahedral structure. In 1978, a study using photoelectron spectroscopy appeared to confirm the initial assignment of an O h structure. 
[ 5 ] The octahedral assignment remained for nearly 20 years until 1989, when Girolami and Morse showed that [Zr(CH 3 ) 6 ] 2− was trigonal prismatic as indicated by X-ray crystallography . [ 6 ] They interpreted the non-octahedral structure as the result of a second-order Jahn-Teller effect, and predicted that other d 0 ML 6 species such as [Nb(CH 3 ) 6 ] − , [Ta(CH 3 ) 6 ] − , and W(CH 3 ) 6 would also prove to be trigonal prismatic. This report prompted other investigations into the structure of W(CH 3 ) 6 . Using gas-phase electron diffraction , Volden et al. confirmed that W(CH 3 ) 6 indeed has a trigonal prismatic structure with either D 3 h or C 3 v symmetry. [ 7 ] In 1996, Seppelt et al. reported that W(CH 3 ) 6 had a strongly distorted trigonal prismatic coordination geometry based on single-crystal X-ray diffraction , which they later confirmed in 1998. [ 4 ] [ 8 ] As shown in the top figure at right, the ideal or D 3 h trigonal prism in which all six carbon atoms are equivalent is distorted to the C 3v structure observed by Seppelt et al. by opening up one set of three methyl groups (upper triangle) to wider C-W-C angles (94-97°) with slightly shorter C-W bond lengths, while closing the other set of three methyls (lower triangle) to 75-78° with longer bond lengths. As suggested originally by Girolami in 1989, [ 6 ] the deviation from octahedral geometry can be ascribed to a second-order Jahn-Teller distortion . [ 9 ] [ 10 ] In 1995, before the work of Seppelt and Pfennig but after Girolami's work, Landis and coworkers predicted a distorted trigonal prismatic structure based on valence bond theory and VALBOND calculations.
[ 11 ] [ 12 ] The history of the structure of W(CH 3 ) 6 illustrates an inherent difficulty in interpreting spectral data for new compounds: initial data may not provide reason to believe the structure deviates from a presumed geometry based on significant historical precedence, but there is always the possibility that the initial assignment will prove to be incorrect. Prior to 1989, there was no reason to suspect that ML 6 compounds were anything but octahedral , yet new evidence and improved characterization methods suggested that perhaps there were exceptions to the rule, as evidenced by the case of W(CH 3 ) 6 . These discoveries helped to spawn re-evaluation of the theoretical considerations for ML 6 geometries. Other 6-coordinate complexes with distorted trigonal prismatic structures include [MoMe 6 ], [NbMe 6 ] − , and [TaPh 6 ] − . All are d 0 complexes. Some 6-coordinate complexes with regular trigonal prismatic structures (D 3h symmetry) include [ReMe 6 ] (d 1 ), [TaMe 6 ] − (d 0 ), and the aforementioned [ZrMe 6 ] 2− (d 0 ). [ 13 ] At room temperature , hexamethyltungsten decomposes , releasing methane and trace amounts of ethane . The black residue is purported to contain polymethylene and tungsten, but the decomposition of W(CH 3 ) 6 to form tungsten metal is highly unlikely. [ citation needed ] The following equation is the approximate stoichiometry proposed by Wilkinson and Shortland: [ 1 ] Like many organometallic complexes, WMe 6 is destroyed by oxygen . Similarly, acids give methane and unidentified tungsten derivatives, while halogens give the methyl halide and leave the tungsten halide. A patent application was submitted in 1991 suggesting the use of W(CH 3 ) 6 in the manufacture of semiconductor devices for chemical vapor deposition of tungsten thin films ; [ 14 ] however, to date it has not been used for this purpose. Rather, tungsten hexafluoride and hydrogen are used instead. 
[ 15 ] Treatment of W(CH 3 ) 6 with F 2 diluted with Ne at −90 °C affords W(CF 3 ) 6 in 50% yield as an extremely volatile white solid. [ 16 ] Hexamethyltungsten(VI) reacts with trimethylphosphine in light petroleum to give WMe 6 (PMe 3 ), which, in neat PMe 3 under UV irradiation, gives the carbyne complex trans -WMe(≡CMe)(PMe 3 ) 4 in high yield. Serious explosions have been reported as a result of working with W(CH 3 ) 6 , even in the absence of air. [ 5 ] [ 17 ]
https://en.wikipedia.org/wiki/W(CH3)6
The W. David Kingery Award is an award presented annually by the American Ceramic Society (ACerS) to individuals who have made significant lifelong contributions to the field of ceramic science and engineering . [ 1 ] The award is named in honor of W. David Kingery , a prominent figure in ceramics research, and is one of the highest honors bestowed in the ceramics community, celebrating sustained excellence in research, leadership, and education over the course of a career. [ 2 ] The W. David Kingery Award was established in 1998 by ACerS to honor the memory and contributions of W. David Kingery, whose work transformed the field of ceramics. Kingery is often referred to as the "father of modern ceramics" due to his research in ceramic processing, especially in sintering , a process critical to the formation of dense ceramic bodies from powders. [ 3 ] His interdisciplinary approach, which combined elements of materials science , chemistry , and physics , revolutionized the manufacturing and application of ceramic materials. Kingery's research extended beyond basic science to include practical applications, from high-performance materials used in aerospace and electronics to advanced ceramic technologies in energy production and medicine. [ 4 ] His influence as an educator was equally impactful, having authored several foundational textbooks in ceramics and materials science, including the influential Introduction to Ceramics . [ 5 ] Throughout his career, Kingery was a prominent advocate for the advancement of ceramic engineering and education, mentoring many future leaders in the field. The award is conferred based on a rigorous evaluation of the nominee's career achievements. 
[ 6 ] It recognizes individuals who have demonstrated sustained excellence and made significant, long-term contributions to the field of ceramics. While the award is open to candidates from both academic and industrial sectors, recipients typically have a body of work that spans decades, influencing not only their own area of expertise but also the broader ceramics community. [ 7 ] The award reflects both individual accomplishment and contributions that benefit society as a whole through the advancement of ceramic technology. Many recipients of the W. David Kingery Award have been recognized for their pioneering research and contributions to ceramics, both in academic and industrial settings. These individuals have made advancements in areas such as ceramic processing, high-temperature materials , sintering technologies, and the development of ceramic materials for structural, electronic, and biomedical applications . [ 8 ]
https://en.wikipedia.org/wiki/W._David_Kingery_Award
William Ernest Stephen Turner (22 September 1881 – 27 October 1963) was a British chemist and pioneer of scientific glass technology. [ 1 ] [ 2 ] Turner was born in Wednesbury , Staffordshire on 22 September 1881. He went to King Edward VI Grammar School , Five Ways, Birmingham , and received a BSc (1902) and MSc (1904) in chemistry at the University of Birmingham . [ 1 ] He married Mary Isobell Marshall (died 1939) and they had 4 children. [ 1 ] In 1904, he joined the University College of Sheffield as a lecturer, and, in 1915, established the Department of Glass Manufacture, which became in 1916 the Department of Glass Technology. He remained as its head until his retirement in 1945. [ 1 ] In 1943, he married Helen Nairn Munro, an artist noted for her glass engraving, and a teacher of glass decoration at the Edinburgh College of Art . [ 1 ] She was provided with a blue wedding dress and shoes in glass fibre cloth (which was then an unusual industrial material). The dress has been selected as one of the items in the BBC 's A History of the World in 100 Objects . [ 3 ] The same year, he established from his extensive collection of historical and modern glass what became the Turner Museum of Glass , and the wedding dress is on display there. [ 3 ] [ 4 ] He died on 27 October 1963. [ 2 ] From 1904 to 1914, he published 21 papers on physical chemistry, mainly on molecular weights in solution. However, the bulk of his work from 1917 to 1954 was on the chemistry and technology of glass. Following his retirement, he produced an extensive series on the history of glass technology and on glass in archeology . Apart from this, in 1909, he wrote a series of articles in the Sheffield Daily Telegraph about the scientist in industry, in which cooperation with universities was urged. [ 1 ] His early career was strictly academic, largely dealing with the associations of molecules in the liquid state.
However, as his articles in the local newspaper showed, he was interested in the application of science to practical industrial problems, and this became the main theme of his work. The beginning of the First World War cut off metallurgical supplies from Germany and Austria , and Turner proposed that the University should help British industry. The work in metallurgy led to enquiries about glass, and in 1915 Turner produced a 'Report on the glass industry of Yorkshire', noting that this was largely unscientific and rule of thumb in nature. He thereby persuaded the University to set up a Department of Glass Manufacture in 1915 for research and teaching where he remained for the rest of his career, becoming internationally known. The main thrust of his research was on a fundamental understanding of the relationship between the chemical composition and the working properties of glasses. [ 1 ] In 1916, he founded the Society of Glass Technology , becoming its first secretary. It published a Journal, which he edited until 1951. He was also involved in the formation of the International Commission on Glass . [ 2 ] Turner initially taught physical chemistry, and in 1905 started specific courses for metallurgists. This involvement led him to become President of the Sheffield Society of Applied Metallurgy in 1914. In 1915, the Department of Glass Manufacture began an outreach programme, providing short courses to industry in Mexborough , Barnsley , Castleford and Knottingley in addition to Saturday classes in Sheffield. These were extended to glass making centres in Derby , Alloa , Glasgow and London . From 1917, full-time day students entered for what became a Bachelor of Technical Science degree. During the Second World War , Turner and other staff of the department provided technical lectures to industries such as those making glass electronic vacuum tubes . 
[ 1 ] He was appointed an Officer of the Order of the British Empire in the 1919 New Year Honours [ 5 ] for the application of science to the glass industry, and in 1938 was elected a Fellow of the Royal Society . He was the only person outside Germany to receive the Otto Schott Medal. [ 1 ]
https://en.wikipedia.org/wiki/W._E._S._Turner
Tungsten(III) oxide (W 2 O 3 ) is a compound of tungsten and oxygen . It has been reported (2006) as being grown as a thin film by atomic layer deposition at temperatures between 140 and 240 °C using W 2 (N(CH 3 ) 2 ) 6 as a precursor. [ 1 ] It is not referred to in major textbooks. [ 2 ] [ 3 ] Some older literature refers to the compound W 2 O 3 , but as the atomic weight of tungsten was believed at the time to be 92 (i.e., approximately half the modern accepted value of 183.84), the compound actually being referred to was WO 3 . [ 4 ] Reports of the compound date back to at least the 1970s, but only as thin films or surfaces – no bulk synthesis of the material is known. [ 5 ] Tungsten(III) oxide is used in various types of infrared absorbing coatings and foils. [ 6 ]
https://en.wikipedia.org/wiki/W2O3
W3Schools is a freemium educational website for learning coding online. [ 1 ] [ 2 ] Initially released in 1998, it derives its name from the World Wide Web but is not affiliated with the W3 Consortium . [ 3 ] [ 4 ] [ unreliable source ] W3Schools offers courses covering many aspects of web development. [ 5 ] W3Schools also publishes free HTML templates. It is run by Refsnes Data in Norway . [ 6 ] It has an online text editor called TryIt Editor, and readers can edit examples and run the code in a test environment. The website also offers free hosting for small static websites. On the site, source code examples with explanations are shown free of charge in English , most of which can also be edited and executed interactively in a live editor. Other important code elements are hidden so that the user can focus on the code shown (developer sandbox). The tutorials are divided into individual chapters on the development languages. In addition to the basics, application-related implementation options and examples, as well as a focus on individual elements of the programming language (so-called "references") are documented. In addition, there is a YouTube channel, which takes up and explains certain topics in web development, and an Internet forum. Technologies such as HTML , CSS , JavaScript , JSON , C , C++ , C# , Java , PHP , React , AngularJS , SQL , Python , Django , Bootstrap , Node.js , jQuery , XQuery , Ajax , and XML are all supported. [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/W3Schools
Tungsten(II) chloride is the inorganic compound with the formula W6Cl12. It is a polymeric cluster compound . The material dissolves in concentrated hydrochloric acid , forming (H3O)2[W6Cl14](H2O)x. Heating this salt gives yellow-brown W6Cl12. [ 1 ] The structural chemistry resembles that observed for molybdenum(II) chloride . Tungsten(II) chloride is prepared by reduction of the hexachloride. Bismuth is a typical reductant:

6 WCl6 + 8 Bi → W6Cl12 + 8 BiCl3
https://en.wikipedia.org/wiki/W6Cl12
WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated at $1 billion [ 1 ] and was projected to grow to $4.4 billion by 2014 according to Gartner , [ 2 ] a technology research firm. In 2015 Gartner estimated the WAN optimization market at $1.1 billion. [ 3 ] The most common measures of TCP data-transfer efficiency (i.e., optimization) are throughput, bandwidth requirements, latency, protocol optimization, and congestion, as manifested in dropped packets. [ 4 ] In addition, the WAN itself can be classified with regard to the distance between endpoints and the amount of data transferred. Two common business WAN topologies are Branch to Headquarters and Data Center to Data Center (DC2DC). In general, "Branch" WAN links span shorter distances, use less bandwidth, support more simultaneous but smaller and shorter-lived connections, and handle a greater variety of protocols. They are used for business applications such as email, content management systems, database applications, and Web delivery. In comparison, "DC2DC" WAN links tend to require more bandwidth, span greater distances, and involve fewer connections, but those connections are bigger (100 Mbit/s to 1 Gbit/s flows) and of longer duration. Traffic on a "DC2DC" WAN may include replication, backup, data migration , virtualization, and other Business Continuity / Disaster Recovery (BC/DR) flows. WAN optimization has been the subject of extensive academic research almost since the advent of the WAN. [ 5 ] In the early 2000s, research in both the private and public sectors turned to improving the end-to-end throughput of TCP, [ 6 ] and the target of the first proprietary WAN optimization solutions was the Branch WAN. In recent years, however, the rapid growth of digital data, and the concomitant need to store and protect it, has created a need for DC2DC WAN optimization.
For example, such optimizations can be performed to increase overall network capacity utilization, [ 7 ] [ 8 ] meet inter-datacenter transfer deadlines, [ 9 ] [ 10 ] [ 11 ] or minimize average completion times of data transfers. [ 11 ] [ 12 ] As another example, private inter-datacenter WANs can benefit from optimizations for fast and efficient geo-replication of data and content, such as newly computed machine learning models or multimedia content. [ 13 ] [ 14 ] Component techniques of Branch WAN optimization include deduplication, wide area file services (WAFS), SMB proxy, HTTPS proxy , media multicasting , web caching , and bandwidth management . Requirements for DC2DC WAN optimization also center on deduplication and TCP acceleration; however, these must occur in the context of multi-gigabit data transfer rates.
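The deduplication technique shared by Branch and DC2DC optimization can be illustrated with a minimal fixed-size-chunk sketch (a hypothetical toy, not any vendor's algorithm; real appliances use variable, content-defined chunking): the sending appliance replaces chunks the far end has already seen with short fingerprints, so repeated data crosses the WAN only once.

```python
import hashlib

CHUNK = 64  # bytes per chunk; deliberately tiny for illustration

def deduplicate(data: bytes, seen: dict) -> list:
    """Sender side: encode data as ('raw', chunk) or ('ref', digest) tokens.

    `seen` maps SHA-256 digests to chunks and models the dictionary that
    the appliances at both ends of the WAN keep in sync.
    """
    tokens = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            tokens.append(("ref", digest))   # already transferred: fingerprint only
        else:
            seen[digest] = chunk
            tokens.append(("raw", chunk))    # first sighting: send the bytes
    return tokens

def reassemble(tokens: list, seen: bytes) -> bytes:
    """Receiver side: expand fingerprints back into the original byte stream."""
    return b"".join(p if kind == "raw" else seen[p] for kind, p in tokens)
```

On a second transfer of the same data, every token is a reference, so only fingerprints need to cross the link.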
https://en.wikipedia.org/wiki/WAN_optimization
WAP billing is a mechanism for consumers to buy content from Wireless Application Protocol (WAP) sites that is charged directly to their mobile phone bill. It is an alternative payment mechanism to debit or credit cards and to premium SMS billing. [ 1 ] [ 2 ] Using WAP billing, consumers can buy mobile content without registering for a service or entering a username or password. [ 3 ] The user clicks on a link and agrees to make a purchase, after which they can download the content. WAP billing is particularly associated with downloading mobile entertainment content such as ringtones, mobile games and wallpapers. Some commentators have suggested it could compete with premium SMS as a leading payment channel for mobile content. [ 2 ] [ 4 ] [ 5 ] WAP billing works with WAP-enabled mobile phones over a GPRS or 3G wireless connection. [ 6 ] The customer initiates a WAP session with the content service provider, for example by browsing a WAP page. [ 4 ] The WAP site host obtains the visitor's MSISDN without the visitor having to register on a specific WAP gateway or service; this information is provided through integration with the operator's own MSISDN lookup service. Consumers confirm a purchase by clicking on a 'confirm purchase' link on their mobile phone, the WAP billing platform informs the WAP application of the completed purchase transaction, and consumers are redirected to the content they have purchased. [ 7 ] The purchases are recorded and billed directly to the mobile phone bill using the MSISDN. [ 4 ] Attempts have been made to create a single, cross-operator WAP billing platform that can support the purchase of products on any mobile network. [ 5 ] [ 8 ] [ 9 ] In other markets, such as Ireland, WAP billing is available across the O2 and Vodafone mobile networks via MSISDN forwarding; there, unlike in the UK market, WAP billing is only available to mobile aggregators on a case-by-case, business-model basis.
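The purchase flow described above (MSISDN lookup, confirmation click, direct billing, redirect to content) can be sketched as a toy model. All names here are hypothetical; a real platform integrates with the operator's MSISDN lookup and billing back ends rather than an in-memory dictionary.

```python
class WapBillingSession:
    """Toy model of a WAP billing purchase: lookup MSISDN, confirm, bill."""

    def __init__(self, operator_lookup):
        # operator_lookup: callable mapping a session id to an MSISDN,
        # standing in for the operator's MSISDN lookup service.
        self.operator_lookup = operator_lookup
        self.msisdn = None
        self.billed = []

    def browse(self, session_id):
        # Step 1: the user browses a WAP page; the operator reveals the
        # MSISDN, so no registration, username or password is needed.
        self.msisdn = self.operator_lookup(session_id)
        return self.msisdn is not None

    def confirm_purchase(self, item, price):
        # Step 2: the user clicks 'confirm purchase'; the charge is
        # recorded directly against the phone bill via the MSISDN.
        if self.msisdn is None:
            raise RuntimeError("MSISDN unknown; cannot bill")
        self.billed.append((self.msisdn, item, price))
        # Step 3: the user is redirected to the purchased content
        # (a hypothetical URL scheme, for illustration only).
        return f"content://{item}"
```

A single `confirm_purchase` call is all that separates browsing from being billed, which is also why the transparency criticisms below arise.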
The benefits cited for WAP billing include the ability to sell to minors who lack a credit card or bank account [ 10 ] and an improved customer experience, including 'single click' purchases [ 9 ] [ 11 ] in which transactions are completed without consumers having to send or receive a text message [ 12 ] or remember shortcodes. [ 4 ] It has been claimed that WAP billing also reduces the possibility of fraud when paying for mobile content. [ 4 ] Another cited benefit is that users get the same 'browse and buy' experience they are used to on a PC on the Internet. WAP billing, however, lacks transparency for the customer. The act of signing a contract, handing money to a person, and reading, understanding and writing or at least typing something, thereby also indirectly proving that one is a legitimate customer, is reduced to a single touch. Even a child or a pet that occasionally touches the screen may trigger a purchase. Often a customer does not notice that they have actually paid for something or bought a subscription until they look at their phone bill afterwards. Silently starting billing with a single click is very inviting for malicious apps and malicious embedded ads; this misuse of WAP billing is a form of clickjacking . Once triggered, a payment is hard to stop or cancel. Normally there are three entities involved in the claim of the money: Thus, the service is indirectly paid through the customer's phone bill, which makes it more complicated to deny the payment or to claim the money back. In 2013, the Federal Trade Commission settled with Jesta Digital LLC concerning unauthorized WAP billing charges. [ 13 ]
https://en.wikipedia.org/wiki/WAP_billing
A WAP gateway sits between mobile devices using the Wireless Application Protocol (WAP) and the World Wide Web , passing pages from one to the other much like a proxy. It translates pages into a form suitable for the mobile devices, for instance using the Wireless Markup Language (WML). This process is hidden from the phone, so it may access the page in the same way a browser accesses HTML, using a URL (for example, http://example.com/foo.wml), provided the mobile phone operator has not specifically prevented this. WAP gateway software encodes and decodes requests and responses between the handset's microbrowser and the internet: it decodes the encoded WAP requests from the microbrowser and sends HTTP requests to the internet or to a local application server, and it encodes the WML and HDML data returning from the web for transmission to the microbrowser in the handset.
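The gateway's translation step can be caricatured in a few lines. This is a hypothetical sketch of the idea only: real gateways speak the binary WSP/WTP protocols and emit tokenized (compiled) WML, not plain text.

```python
import re

def html_to_wml(html: str) -> str:
    """Strip an HTML page down to its text and wrap it in a minimal WML
    deck with a single card, roughly the shape a WML-only handset expects."""
    text = re.sub(r"<[^>]+>", " ", html)      # drop all HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return ('<?xml version="1.0"?>\n'
            '<wml><card id="main"><p>' + text + "</p></card></wml>")
```

A gateway would apply a transformation like this to the HTTP response body before encoding it for the microbrowser.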
https://en.wikipedia.org/wiki/WAP_gateway
WATIAC was a virtual computer developed for teaching the principles of assembly language programming to undergraduates. [ 1 ] [ 2 ] [ 3 ] WATIAC, and the WATMAP assembly language that ran on it, were developed in 1973 by the newly founded Computer Systems Group at the University of Waterloo , under the direction of Wes Graham . In the 1970s most programming was conducted through batch stream processing , where the operating systems of the day, like IBM's OS/360 , would allow a single program to use all the resources of a large computer for a limited period of time. [ 4 ] Since student programs were only run a few times, possibly only once, after they had been successfully written and debugged, efficient running of those programs was of relatively little importance compared with quick compilation and good error messages. Waterloo had been a leader in writing single-pass, compile-and-go teaching compilers, first with its WATFOR FORTRAN compiler and then its WATBOL COBOL compiler. [ 2 ] [ 3 ] WATMAP was developed to be a similar compile-and-go teaching compiler.
https://en.wikipedia.org/wiki/WATIAC
Tungsten hexabromide , also known as tungsten(VI) bromide , is a chemical compound of tungsten and bromine with the formula WBr6. It is an air-sensitive dark grey powder that decomposes above 200 °C to tungsten(V) bromide and bromine. [ 1 ] [ 3 ] Tungsten hexabromide is mainly produced by the reaction of metallic tungsten and bromine at temperatures around 100 °C in a nitrogen atmosphere: [ 1 ] [ 2 ]

W + 3 Br2 → WBr6

Another method of producing this compound is the reaction of tungsten hexacarbonyl and bromine at room temperature, releasing carbon monoxide . [ 4 ] It can also be produced by the metathesis reaction of boron tribromide and tungsten hexachloride . [ 5 ] WBr6 is reduced by elemental antimony at elevated temperatures, successively producing WBr5, WBr4, W4Br10, W5Br12, and finally WBr2 at 350 °C; this reaction produces antimony tribromide as a side product. [ 4 ] [ 6 ] Any of these bromides can be reverted to the hexabromide by oxidation with bromine at 160 °C. [ 7 ] Tungsten hexabromide is hydrolyzed in water, producing tungsten pentoxide and releasing bromine. [ 1 ] Tungsten(VI) oxytetrabromide is produced by the reaction of tungsten hexabromide and tungsten(VI) oxide . [ 7 ] The trigonal crystal structure of WBr6 consists of isolated WBr6 octahedra and is isostructural with α-WCl6. [ 2 ]
https://en.wikipedia.org/wiki/WBr6
Tungsten hexachloride is an inorganic chemical compound of tungsten and chlorine with the chemical formula WCl6. This dark violet-blue compound exists as volatile crystals under standard conditions . It is an important starting reagent in the preparation of tungsten compounds. [ 1 ] Other examples of charge-neutral hexachlorides are rhenium(VI) chloride and molybdenum(VI) chloride . The highly volatile tungsten hexafluoride is also known. With a d0 metal centre, tungsten hexachloride is diamagnetic . Tungsten hexachloride can be prepared by chlorinating tungsten metal in a sealed tube at 600 °C: [ 2 ]

W + 3 Cl2 → WCl6

Tungsten hexachloride exists in both blue and red polymorphs , referred to respectively as α and β. The wine-red β form can be obtained by rapid cooling, whereas the blue α form is more stable at room temperature . Although these polymorphs are distinctly colored, their molecular structures are very similar. Both polymorphs feature WCl6 molecules with octahedral geometry , in which all six W–Cl bonds are equivalent, with lengths of 224–226 pm . The densities are very similar: 3.68 g/cm3 for α and 3.62 g/cm3 for β. The low-temperature form is slightly more dense, as expected. [ 3 ] Tungsten hexachloride is readily hydrolyzed , even by moist air , giving the orange oxychlorides WOCl4 and WO2Cl2, and subsequently tungsten trioxide . WCl6 is soluble in carbon disulfide , carbon tetrachloride , and phosphorus oxychloride . [ 2 ] Methylation with trimethylaluminium affords hexamethyltungsten . Treatment with butyllithium affords a reagent that is useful for deoxygenation of epoxides . [ 4 ] The chloride ligands in WCl6 can be replaced by many anionic ligands, including bromide , thiocyanate , alkoxide , alkyl and aryl . Reduction of WCl6 can be effected with a mixture of tetrachloroethylene and tetraphenylarsonium chloride ; [ 5 ] the resulting W(V) hexachloride salt is a derivative of tungsten(V) chloride .
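As a back-of-the-envelope check on the chlorination route (assuming the balanced equation W + 3 Cl2 → WCl6 and standard atomic weights W = 183.84 and Cl = 35.45 g/mol), the chlorine demand and theoretical yield per gram of tungsten can be computed directly:

```python
# Molar masses in g/mol, from standard atomic weights
M_W = 183.84
M_CL2 = 2 * 35.45
M_WCL6 = M_W + 6 * 35.45

def chlorination_masses(grams_w: float):
    """Mass of Cl2 consumed and WCl6 produced when `grams_w` of tungsten
    is fully chlorinated via W + 3 Cl2 -> WCl6."""
    mol_w = grams_w / M_W          # moles of W
    cl2 = 3 * mol_w * M_CL2        # 3 mol Cl2 per mol W
    wcl6 = mol_w * M_WCL6          # 1 mol WCl6 per mol W
    return cl2, wcl6

# roughly 1.16 g of Cl2 per gram of W, giving about 2.16 g of WCl6
cl2_needed, wcl6_yield = chlorination_masses(1.0)
```

Mass is conserved by construction: the WCl6 produced equals the tungsten plus the chlorine consumed.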
It reacts with arsenic or hydrogen arsenide to form tungsten arsenide . [ 6 ] [ 7 ] WCl6 is an aggressively corrosive oxidant and hydrolyzes to release hydrogen chloride .
https://en.wikipedia.org/wiki/WCl6
Tungsten dichloride dioxide , or tungstyl chloride, is the chemical compound with the formula WO2Cl2. It is a yellow solid, used as a precursor to other tungsten compounds. Like other tungsten halides, WO2Cl2 is sensitive to moisture, undergoing hydrolysis. WO2Cl2 is prepared by a ligand redistribution reaction between tungsten trioxide and tungsten hexachloride :

2 WO3 + WCl6 → 3 WO2Cl2

Using a two-zone tube furnace , a vacuum-sealed tube containing these solids is heated to 350 °C. The yellow product sublimes to the cooler end of the reaction tube. No redox occurs in this process. [ 2 ] An alternative route highlights the oxophilicity of tungsten. [ 3 ] This reaction, like the preceding one, proceeds via the intermediacy of WOCl4. Gaseous tungsten dichloride dioxide is a monomer . [ 4 ] Solid tungsten dichloride dioxide is a polymer consisting of distorted octahedral W centres. The polymer is characterized by two short W–O distances, typical of a multiple W–O bond, and two long W–O distances more typical of a single or dative W–O bond. [ 5 ] Tungsten forms a number of oxyhalides including WOCl4, WOCl3 and WOCl2; the corresponding bromides ( WOBr4, WOBr3, WOBr2 ) are also known, as is WO2I2. [ 6 ] WO2Cl2 is a Lewis acid , forming soluble adducts of the type WO2Cl2L2, where L is a donor ligand such as bipyridine or dimethoxyethane . Such complexes often cannot be prepared by depolymerization of the inorganic solid, but are generated in situ from WOCl4. [ 7 ]
https://en.wikipedia.org/wiki/WCl2O2
Tungsten(IV) chloride is an inorganic compound with the formula WCl4. It is a diamagnetic black solid. The compound is of interest in research as one of a handful of binary tungsten chlorides . WCl4 is usually prepared by reduction of tungsten hexachloride . Many reductants have been reported, including red phosphorus, tungsten hexacarbonyl , gallium, tin, and antimony; the latter is reported to be optimal: [ 1 ]

3 WCl6 + 2 Sb → 3 WCl4 + 2 SbCl3

Like most binary metal halides, WCl4 is polymeric. It consists of linear chains of tungsten atoms, each in octahedral geometry. Of the six chloride ligands attached to each W center, four are bridging ligands . The W–W separations are alternately bonding (2.688 Å) and nonbonding (3.787 Å). Reduction of tungsten(IV) chloride with sodium yields a ditungsten(III) heptachloride derivative. [ 2 ]
https://en.wikipedia.org/wiki/WCl4
WD 0816–310 (PM J08186–3110) is a magnetic white dwarf with metal pollution originating from the tidal disruption of a planetary body. The metals are guided by the magnetic field onto the surface of the white dwarf, creating a "scar" on the surface that is rich in the accreted planetary material. [ 4 ] [ 3 ] The object was first identified as a possible white dwarf in 2005, from data of the Digitized Sky Survey . [ 5 ] It was confirmed as a white dwarf in 2008 with spectroscopic data from CTIO, and the same team found that the white dwarf is polluted with calcium , magnesium and iron . [ 6 ] In 2019 a variable magnetic field was discovered thanks to Zeeman splitting ; this observation was made with archived spectropolarimetric data from FORS1 at the Very Large Telescope (VLT). [ 7 ] In 2021 the white dwarf was studied in detail with the 4 m telescope at CTIO and with the VLT (FORS1 and X-shooter). The elements sodium , magnesium, calcium, chromium , manganese , iron and nickel were detected in the atmosphere of the white dwarf. The atmosphere is enriched in magnesium relative to other elements, which is predicted for old stellar systems. The researchers also found hydrogen in the otherwise helium -dominated atmosphere of WD 0816–310; the presence of hydrogen could be explained by pollution from an asteroid containing water ice . These researchers found that the abundance of metals changed between two spectra taken 10 years apart. They suggested that spots enriched in metals are present on the surface of the white dwarf, a process controlled by its magnetic field. [ 1 ] In 2024 this was confirmed with circular spectropolarimetric observations with FORS2 on the VLT. The observations measured a dipolar field strength at the pole of about 140 kilogauss . Around 310,000 years ago WD 0816–310 accreted a Vesta -sized object with a composition similar to chondritic meteorites .
[ 3 ] The observations showed that the variations in metal line strength and magnetic field intensity are synchronized. This is seen as evidence that the magnetic field determines the local density of metals on the surface. These patches are likely present near one of the magnetic poles of the white dwarf. The material from an accreted asteroid will first form a disk around the white dwarf. Closer to the white dwarf, the dusty material sublimes into a metallic gas. The researchers argue that the white dwarf will ionize at least part of this gas. The ions then follow the magnetic field of the white dwarf and, as a result of the Lorentz force , spiral around the local field lines. On their way to the poles of the white dwarf, the ions collide with neutral atoms in the gas disk, ionizing them in the process; this leads to a substantial level of ionization of the gas disk. [ 3 ] A study in 2024 that discovered a second metal scar, around WD 2138−332 , suggests that metal scars are common around magnetic white dwarfs with metal pollution. [ 8 ]
https://en.wikipedia.org/wiki/WD_0816–310
WD J2147–4035 (DES J214756.46−403529.3) is a very cold white dwarf with a temperature of about 3,050 kelvin (2,780 °C; 5,030 °F). It also shows signs of pollution with planetary debris. [ 1 ] WD J2147–4035 was first identified from Gaia data as a white dwarf candidate in 2019. [ 3 ] In 2021 it was pointed out as an unusually faint white dwarf in the solar neighbourhood; the researchers found it could be extremely old (about 10 Gyr). [ 4 ] In 2022 results from observations with X-shooter on the Very Large Telescope were published. The object was identified as a white dwarf, likely with a helium -dominated atmosphere. The researchers also detected metal pollution in the form of sodium , lithium , potassium and possibly carbon . The lithium line shows Zeeman splitting , which indicates that WD J2147–4035 is a magnetic white dwarf; the researchers measured a magnetic field strength of 0.55 ± 0.03 megagauss . The magnetism can lead to an inhomogeneous brightness distribution, and the TESS light curve shows that the white dwarf has a rotation period of around 13 hours. The nature of the accreted parent body is unclear as of September 2024. [ 1 ] WD J2147–4035 was once a main-sequence star with a mass of 2.47 ± 0.22 M ☉ , which had a lifetime of about 500 Myr . Once the star became an AGB star , it lost mass and became a white dwarf with a mass of 0.69 ± 0.02 M ☉ . The white dwarf has existed for 10.21 ± 0.22 Gyr, meaning the total age is 10.7 ± 0.3 Gyr. [ 1 ] Cold white dwarfs are often strongly affected by collision-induced absorption (CIA) of hydrogen , which can make them faint at optical red and infrared wavelengths; such objects are also called IR-faint white dwarfs . WD J2147–4035 is, however, very red (r − z = 2.29 mag), which is seen as evidence that it has only a low hydrogen-to-helium ratio, resulting in very mild CIA and therefore giving it its distinct orange color. [ 1 ] Other cool metal-polluted white dwarfs:
https://en.wikipedia.org/wiki/WD_J2147–4035
WEAP (the Water Evaluation and Planning system ) is a model-building tool for water resource planning and policy analysis [ 1 ] that is distributed at no charge to non-profit, academic, and governmental organizations in developing countries. WEAP is used to create simulations of water demand , supply , runoff, evapotranspiration , water allocation, infiltration , crop irrigation requirements, instream flow requirements, ecosystem services , groundwater and surface storage, reservoir operations, pollution generation, treatment, discharge, and instream water quality . The simulations can be created under scenarios of varying policy, hydrology , climate, land use, technology, and socio-economic factors. [ 2 ] WEAP links to the USGS MODFLOW groundwater flow model and the US EPA QUAL2K surface water quality model. WEAP was created in 1988 and continues to be developed and supported by the U.S. center of the Stockholm Environment Institute , a non-profit research institute based at Tufts University in Somerville, Massachusetts . It is used for climate change vulnerability studies and adaptation planning and has been applied by researchers and planners in thousands of organizations worldwide. Establishing the 'current accounts', building scenarios, and evaluating those scenarios against criteria are the main steps in WEAP simulation applications. [ 3 ]
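The "current accounts plus scenarios" workflow can be illustrated with a toy annual water-balance model (purely illustrative, with made-up numbers; WEAP's actual simulation engine covers allocation priorities, hydrology and much more):

```python
def simulate(supply, demand_0, growth, years):
    """Project annual unmet demand under a demand-growth scenario.

    supply:   available supply per year (arbitrary volume units)
    demand_0: demand in the base ('current accounts') year
    growth:   fractional demand growth per year
    Returns a list of (demand, unmet) pairs, one per year.
    """
    results = []
    demand = demand_0
    for _ in range(years):
        unmet = max(0.0, demand - supply)      # shortfall this year
        results.append((round(demand, 2), round(unmet, 2)))
        demand *= 1 + growth                   # scenario assumption
    return results

# Current accounts: supply 100, demand 80; compare two growth scenarios
reference = simulate(supply=100, demand_0=80, growth=0.03, years=10)
high_growth = simulate(supply=100, demand_0=80, growth=0.06, years=10)
```

Evaluating the scenarios against a criterion such as "first year with unmet demand" then shows how much sooner the high-growth scenario runs into shortfall.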
https://en.wikipedia.org/wiki/WEAP
WELL Building Standard ( WELL ) is a healthy building certification program developed by the International WELL Building Institute PBC (IWBI), a California-registered public benefit corporation. The WELL Building Standard was launched in 2013 by Paul Scialla of the company Delos, becoming the first well-being-focused building standard. By 2016, over 200 projects in 21 countries had adopted the certification. [ 1 ] In 2014, Green Business Certification Inc. began to provide third-party certification for WELL. By 2024, WELL was being used across more than 5 billion square feet of space in 130 countries, supporting an estimated 25 million occupants in nearly 74,000 commercial and residential locations. [ 2 ] WELL v2 meets best practices on four tenets: evidence-based, verifiable, implementable, and feedback-focused. The principles of WELL v2 are that it be equitable, global, evidence-based, technically robust, customer-focused, and resilient. WELL is a performance-based system in which Performance Verification is completed by an authorized WELL Performance Testing Agent. There are two types of certification: WELL Certification for owner-occupied buildings and WELL Core Certification. WELL Core is for buildings in which tenants occupy more than 75% of the space, and such projects are not required to achieve minimum points in every subject. [ 3 ] The WELL Silver, Gold, and Platinum levels must achieve at least 1, 2, and 3 points per subject respectively, while WELL Bronze has no per-subject minimum; for WELL Core there are no minimum points. [ 3 ] The optimization point requirements for WELL Bronze, Silver, Gold, and Platinum are 40, 50, 60, and 80 points respectively. The rating system caps points at 12 per subject, except Innovation, which is capped at 10; total points must not exceed 100. [ 3 ] WELL-certified buildings must pass all precondition requirements and can earn the optimization points available from additional features in each subject.
[ 3 ] Because occupants spend about 90% of their time indoors, they can be exposed to indoor air pollution, which can lead to headaches, dry throat, eye irritation, runny nose, asthma attacks, infection with Legionella bacteria and carbon monoxide poisoning. Indoor air pollution leads to thousands of cancer deaths and some 100,000 respiratory issues annually. Avoidable costs in the U.S. could exceed 100 billion dollars annually: 45% from radon and tobacco, 45% from lost productivity, and 10% from respiratory diseases. Combustion sources such as candles, tobacco, stoves, furnaces and fireplaces, producing carbon monoxide , nitrogen dioxide , and small particles, are common. Furnishings, fabrics and cleaning products emit volatile organic compounds (VOCs). These problems can be addressed by eliminating problem sources and through design solutions. Air pollution leads to 7 million premature deaths per year; around 600,000 of those were children under 5 years old in 2012. Under the A01 Air Quality topic, WELL limits particulate matter (both PM2.5 and PM10) to under 15 and 50 µg/m³ for normal regions, or 25 and 50 µg/m³ (or 30% of the 24–48 hour average outdoor levels) for polluted regions, and sets thresholds for volatile organic compounds (VOCs) such as benzene , formaldehyde and toluene of 10, 50, and 300 µg/m³, or a total VOC limit of 500 µg/m³. Inorganic gases such as carbon monoxide and ozone are limited to 10 mg/m³ and 100 µg/m³ respectively. Radon is limited to under 0.15 becquerel /litre. WELL requires that all air quality parameters except radon be monitored with a digital platform. [ 3 ] Under the A02 Smoke-free Environment topic, indoor smoking and use of electronic cigarettes are not allowed; outdoors, smoking is permitted only at ground level and further than 7.5 m from project apertures, including air-intake areas.
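The A01 thresholds for a "normal" region can be expressed as a small compliance check. The limits below are transcribed from the paragraph above (PM and VOCs in µg/m³, CO in mg/m³, ozone in µg/m³, radon in Bq/L); the function itself is only an illustrative sketch, not an official WELL tool.

```python
# A01 Air Quality limits for a normal region, as listed in the text
A01_LIMITS = {
    "pm2.5": 15, "pm10": 50,                       # particulate matter, ug/m3
    "benzene": 10, "formaldehyde": 50,             # individual VOCs, ug/m3
    "toluene": 300, "tvoc": 500,                   # total VOC, ug/m3
    "co": 10,                                      # carbon monoxide, mg/m3
    "ozone": 100,                                  # ug/m3
    "radon": 0.15,                                 # Bq/L
}

def a01_exceedances(readings: dict) -> list:
    """Return the parameters whose measured value meets or exceeds its A01
    limit ('under X' is read as a strict limit), sorted alphabetically."""
    return sorted(p for p, v in readings.items()
                  if p in A01_LIMITS and v >= A01_LIMITS[p])
```

For example, a reading set with PM10 at 55 µg/m³ and formaldehyde at 50 µg/m³ would flag both parameters.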
[ 3 ] Under the A03 Ventilation Design topic, WELL requires buildings to have existing or new mechanical ventilation systems following ASHRAE 62.1, EN standard 16798-1, AS 1668.2, or CIBSE Guide A: Environmental Design. Natural ventilation can be used without a mechanical ventilation system if the design follows the Natural Ventilation Procedure in ASHRAE 62.1, CIBSE AM10 or AS 1668.4 for at least 90% of the project area. Ventilation monitoring alone can be a solution if the indoor carbon dioxide level is kept under 900 ppm against roughly 500 ppm outdoors. [ 3 ] Under the A04 Construction Pollution Management topic, the contractor is to ensure that ducts are cleaned and protected from contamination , shall fit the installed ventilation system during construction with filters of more than 70% efficiency for particles of 3–10 micrometres, and must implement dust and moisture management, such as temporary barriers, dust guards for saws, and walk-off mats at entryways. [ 3 ] Under the A05 Enhanced Air Quality topic, tightening the particulate matter thresholds to PM2.5 and PM10 under 12 and 30 µg/m³ earns 1 point, and under 10 and 20 µg/m³ earns 2 points. Tightening volatile organic compounds such as benzene , formaldehyde and toluene to under 3, 9 and 300 µg/m³, with additional limits on acetaldehyde , acrylonitrile , caprolactam and naphthalene of 140, 5, 2.2 and 9 µg/m³, earns 1 point. Limiting inorganic gases such as carbon monoxide and nitrogen dioxide to under 7 mg/m³ and 40 µg/m³ earns an additional 1 point. Under the A06 Enhanced Ventilation Design topic, mechanical and natural ventilation meeting tighter thresholds, demand-controlled ventilation (DCV), an engineered natural ventilation system that keeps CO2 levels low, or supplying at least 50% of workstations in the breathing zone with an airspeed under 50 fpm at the user's head earns 2 points; implementing displacement ventilation or locating air diffusers 2.8 m above the floor earns an additional 1 point.
[ 3 ] Under A07 Operable Windows, providing openable windows giving access to outdoor air for 75% of regularly occupied areas (or 4% of the occupied area where there are many floors) earns 1 point, with another 1 point if outdoor PM2.5, temperature and relative humidity are displayed near the windows. Under the A08 Air Quality Monitoring and Awareness topic, installing air quality sensors and submitting the data to WELL provides 1 point, and display screens in the building to promote awareness provide another 1 point. Under the A09 Pollution Infiltration Management topic, entranceway design, such as a 3-metre air door, slowing the movement of air from outdoors to indoors with a vestibule, revolving doors or an air curtain , and management by wet cleaning once a week and vacuuming once a day, provides 1 point; a building envelope designed to mitigate outside air pollution provides another 1 point. Under the A10 Combustion Minimization topic, combustion is restricted indoors or kept 3.3 m away from the building, and vehicle idling is limited to 30 seconds. Under the A11 Source Separation topic, removing pollution sources by design, with separately closed doors, negative pressurization or exhaust fans (for example, exhausting the return air of all bathrooms, kitchens, cleaning and chemical storage rooms, and high-volume printers and copiers to the outdoors), provides 1 point. Under the A12 Air Filtration topic, using media filters of appropriate efficiency in the ventilation system or in standalone devices to filter outdoor air (the higher the PM2.5, the higher the required efficiency, from 35% to 95%), replaced annually, provides 1 point. Under the A13 Enhanced Supply Air topic, ventilating occupied spaces with all outdoor air, or with cleaning devices such as an activated carbon filter with 75% efficiency, a media filter, or ultraviolet germicidal irradiation (UVGI) validated under UL 2998 Zero Ozone Emissions Validation or Intertek Zero Ozone Verification, provides 1 point.
Under the A14 Microbe and Mold Control topic, implementing an ultraviolet radiation system for each HVAC coil provides 1 point. [ 3 ] The WELL Water concept aims to increase the rate of adequate hydration in buildings and reduce risks from contaminated water . The human body is about two-thirds water, and a water intake of around 2–3.7 litres per day is recommended to support respiration , perspiration and excretion . Water with high nitrate can impair oxygen transport in infants, causing neurodevelopmental impairment. Trihalomethanes (THMs) and haloacetic acids (HAAs) in water can cause cancer. Legionella control is needed in cooling systems and hot tubs. Materials must not support mold growth . Good bathroom design and better hand washing reduce the risks of enteric and respiratory diseases. For W01 Water Quality Indicators, WELL requires a performance test limiting the turbidity of water to under 1.0 Nephelometric Turbidity Unit (NTU), Formazin Turbidity Unit (FTU) or Formazin Nephelometric Unit (FNU), and any 100 ml water sample must contain zero coliform bacteria . [ 3 ] For W02 Drinking Water Quality, the project requires drinking water with limits on chemical contamination by arsenic , cadmium , chromium , copper , fluoride , lead , mercury , nickel , nitrate , nitrite , chlorine , trihalomethanes , and haloacetic acids . Pesticide contamination has to be controlled, including aldrin and dieldrin , atrazine , carbofuran , chlordane , 2,4-dichlorophenoxyacetic acid , DDT , lindane and pentachlorophenol . Organic contaminants are also limited, such as benzene , benzo(a)pyrene , carbon tetrachloride , 1,2-dichloroethane , tetrachloroethene , toluene , trichloroethylene , 2,4,6-tribromophenol , vinyl chloride , and xylene . For W03 Basic Water Management, annual monitoring of parameters such as turbidity, pH and residual free chlorine is required, and Legionella management must be established in the building. For W04 Enhanced Water Quality, tightening the drinking water contamination thresholds provides 1 point.
W05 Drinking Water Quality Management provides a total of 3 points. For 2 points, the project needs to pre-test water quality at the farthest water dispenser on every 10 floors, covering turbidity , coliform bacteria , pH , total dissolved solids (TDS), total chlorine , residual (free) chlorine, arsenic , lead , copper , nitrate , and benzene , at least one month before Performance Verification, and to monitor the piped water that delivers drinking water by testing water from dispensers quarterly. Turbidity must be 1.0 NTU or less, pH between 6.5 and 9.0, TDS 500 mg/L or less, total chlorine 5 mg/L or less, and residual (free) chlorine 5 mg/L or less. Total coliforms must not be detected in a 100 ml sample. Lead must be 1 microgram/L or less; sampling frequency can be reduced to once a year if the results are under the limit two consecutive times. Copper must be 2 mg/L or less; sampling frequency can be reduced to twice a year, and if the results are under the limit four consecutive times, no further monitoring is needed. Test results must be submitted to WELL annually. The last part, displaying water management information to promote drinking water transparency, provides another 1 point. For W06 Drinking Water Promotion, encouraging people to drink water easily, by providing at least one water dispenser per floor within 30 meters of all users and in all dining areas, designed for water bottle refilling and regularly maintained, provides 1 point.
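As an illustration only (not an official WELL tool; the field names and the `check_sample` helper are invented for this sketch), the W05 limits above can be expressed as a simple check:

```python
# Hypothetical sketch of a W05-style water sample check; limits are
# transcribed from the thresholds listed above, field names are invented.

W05_LIMITS = {
    "turbidity_ntu": 1.0,        # <= 1.0 NTU
    "tds_mg_l": 500.0,           # <= 500 mg/L
    "total_chlorine_mg_l": 5.0,  # <= 5 mg/L
    "free_chlorine_mg_l": 5.0,   # <= 5 mg/L
    "lead_ug_l": 1.0,            # <= 1 microgram/L
    "copper_mg_l": 2.0,          # <= 2 mg/L
    "coliforms_per_100ml": 0.0,  # must not be detected in a 100 ml sample
}

def check_sample(sample):
    """Return the parameters in `sample` that exceed their W05 limit."""
    failures = [name for name, limit in W05_LIMITS.items()
                if sample.get(name, 0.0) > limit]
    # pH is a range (6.5 to 9.0) rather than an upper bound.
    if not 6.5 <= sample.get("ph", 7.0) <= 9.0:
        failures.append("ph")
    return failures

sample = {"turbidity_ntu": 0.4, "ph": 7.2, "tds_mg_l": 320.0,
          "total_chlorine_mg_l": 1.1, "free_chlorine_mg_l": 0.8,
          "lead_ug_l": 2.5, "copper_mg_l": 0.3, "coliforms_per_100ml": 0.0}
print(check_sample(sample))  # ['lead_ug_l']
```

A real certification submission would rely on accredited laboratory results rather than a script like this; the sketch only shows how the published thresholds compose.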
W07 Moisture Management provides a total of 3 points. To limit bacteria and mold growth, a building envelope that incorporates site drainage and storm water management , air-tightness testing, vapor pressure differentials, entryway strategies to minimize water permeation, a drainage plane between the interior and the exterior cladding, and limits on wicking in porous materials, by using non-porous materials such as closed-cell foams , waterproofing membranes, metal between porous materials, or free-draining spaces, provides 1 point. Using protection or implementing measures to eliminate condensation on cold surfaces, such as basements and slab-on-grade floors, and against liquid water, such as interior housewrap in basements, bathrooms, and kitchens, provides another 1 point. A label or manual is required at the point of connection to shut-off pipes, and all water treatment devices need a backflow prevention system such as an air gap or a backflow preventer valve. The last 1 point is for managing moisture through scheduled inspections and a notification system in the building, with all inspection results submitted to WELL annually. W08 Hygiene Support carries the highest points in the Water concept, a total of 4 points. Part 1, 1 point: bathrooms need a trash receptacle in each toilet stall if toilet paper cannot be flushed, free or 50%-subsidized sanitary pads , storage support in toilet stalls, at least one wheelchair-accessible bathroom, one bathroom per project with an infant changing table, a syringe drop box, occupancy signage for single-user bathrooms, and self-primed liquid-seal traps in floor drains. For public projects such as airports, family bathrooms are required, containing an infant changing table, child-size toilets and sinks, motion sensors for lights, slip-resistant flooring, grab bars , a hook or shelf for bags in each toilet stall, and wheelchair accessibility per local code. Part 2, 1 point: bathrooms need hands-free flushing toilets, contactless soap dispensers and hand drying, hands-free exit doors, and sensor-activated, programmable line-purge faucets .
Part 3, 1 point: faucet design that avoids water flowing directly into the drain and limits splashing inside the sink, with a minimum sink width of 23 cm and a flowing column of water at least 20 cm long and at least 7.5 cm from the sink edge. Part 4, 1 point: handwashing supplies must include fragrance -free liquid soap in sealed dispensers, paper towels or hand dryers with HEPA filters or fabric towel rolls, and signage with proper hand-washing steps. [ 3 ] The WELL Nourishment concept supports healthy and sustainable eating patterns by increasing access to fruits and vegetables, limiting highly processed foods, and nudging users toward better choices. Poor nutrition contributes to one in five deaths globally, and unhealthy eating causes more harm than drugs, alcohol, and tobacco combined. Diets are usually low in fruits, vegetables, whole grains, nuts, and seeds, and instead flooded with highly processed foods such as refined sugars and oils. In 2019, the EAT-Lancet Commission developed recommendations for the best food options. WELL addresses this by shaping environmental conditions that influence users to change their diets, with a holistic approach and supportive policies. N01 Fruits and Vegetables: WELL ensures that food outlets offer at least two varieties each of fruits and non-fried vegetables, clearly visible to users, and that at least 50% of the food options at each food outlet are fruits or non-fried vegetables. N02 Nutritional Transparency: other foods, such as packaged food and beverages, must display total calories per serving, macronutrients , and sugar content, and the owner must communicate with users about food allergies . Foods with more than 25 grams of sugar per serving are either banned from menus in dining spaces or at least identified on the items so users can make an informed decision. [ 3 ] N03 Refined Ingredients: restricting sugars, by limiting beverages to under 25 g of sugar per container, ensuring at least 25% of beverages contain no sugar, and limiting non-beverage foods (except fruit) to under 25 grams of sugar per serving, receives 1 point.
Promoting whole grains, by ensuring that in 50% of grain-based foods whole grain is the main ingredient, receives another 1 point. N04 Food Advertising : eliminating advertising for sugary drinks and deep-fried food, and instead promoting water, fruit, and vegetable consumption in sale areas, receives 1 point. N05 Artificial Ingredients : phasing out or restricting artificial ingredients such as colorings , sweeteners , preservatives , and fats and oils , clearly labeled on packaging or signage, receives 1 point. N06 Portion Sizes : dining spaces promoting healthy portion sizes receive 1 point, by limiting portions to below 650 kcal or keeping menu items over 650 kcal under 50% of offerings, and by limiting dish and bowl sizes to a certain area or volume; plate sizes for primary school students, secondary school students, and adults are no more than 20, 25, and 30 cm in diameter respectively. N07 Nutrition Education : providing nutritional knowledge through cooking demonstrations, dietary education by a nutritionist, or gardening workshops receives 1 point. N08 Mindful Eating provides 2 points for dedicating an eating space within 200 meters' walking distance of the project boundary that contains tables and chairs for 25% of users at peak occupancy, protects from outdoor climate hazards , and provides a variety of seating options for small to large groups; if there are employees or students, they receive a daily 30-minute meal break. N09 Special Diets: providing alternative food for people with food allergies , a total of 2 points. For part 1, 1 point: providing food free of at least one of peanuts and tree nuts, gluten and wheat, soy, sesame, or animal products including seafood , dairy , and eggs . For part 2, 1 point: clearly labeling on packaging, menus, and signage when food contains peanuts, fish , shellfish , soy, milk, egg, wheat, tree nuts, sesame, or gluten. N10 Food Preparation: providing space for users' meals, with cold storage, countertops, a sink for dish and hand washing, a microwave or toaster , reusable plates, and a garbage bin, receives 1 point.
N11 Responsible Food Sourcing: sourcing 50% of foods that are certified organic and 25% of animal product lines that are certified sustainable, for example under a seafood certification scheme recognized by the Global Sustainable Seafood Initiative (GSSI), with labeling of sustainable items, receives 1 point. N12 Food Production receives 2 points for providing a garden or greenhouse with food-bearing plants, an edible landscape, or hydroponic or aeroponic farming, accessible to users during regular hours, with a growing area (horizontal and vertical combined) of at least 0.09 sq. m per user or 0.05 sq. m per student (hydroponic and aeroponic systems may be half the size of a normal system), access to food-growing tools, and a location within 400 meters' walking distance. N13 Local Food Environment: supporting local food by locating the project near a fruit and vegetable supermarket or a farmers' market open at least once a week within 400 meters' walking distance, serving a local agriculture program, or hosting a weekly sale of fruits and vegetables receives 1 point. The WELL Light concept aims to create lighting that reduces circadian disruption to improve sleep quality, mood, and productivity. Humans are diurnal , driven by the circadian system, or internal clock; its disruption is linked to obesity, diabetes, depression, breast cancer, and metabolic and sleep disorders, and insufficient light can lead to such disruption. L01 Light Exposure: daylighting design is integrated throughout a project via daylight simulation, such as Spatial Daylight Autonomy analysis, which shows how much daylight illuminates the space throughout working hours . An adequate daylighting level can be determined from the interior layout or the building design, such as the distance from windows.
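The N12 growing-area rule above reduces to simple arithmetic; a minimal sketch (the function name is invented for illustration):

```python
# Sketch of the N12 minimum growing-area rule: at least 0.09 sq. m per
# user (0.05 sq. m per student), horizontal and vertical area combined;
# hydroponic or aeroponic systems may be half the size of a normal system.

def min_growing_area_m2(occupants, students=False, soilless=False):
    per_person = 0.05 if students else 0.09
    area = occupants * per_person
    if soilless:  # hydroponic or aeroponic farming
        area /= 2
    return round(area, 2)

print(min_growing_area_m2(200))                 # 18.0
print(min_growing_area_m2(200, soilless=True))  # 9.0
```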
For projects where daylight access is difficult, circadian lighting design can substitute for daylight, with the intrinsically photosensitive retinal ganglion cells (ipRGC) receiving at least 150 equivalent melanopic lux (EML). [ 3 ] L02 Visual Lighting Design: WELL retains visual lighting design, the conventional lighting method, for users' visual comfort and acuity . Lighting design standards in WELL follow the Illuminating Engineering Society Lighting Library, EN 12464-1 and -2, ISO 8995-1, Chinese standard GB 50034, or the CIBSE SLL Code for Lighting; alternatively, WELL allows the light level thresholds from the U.S. General Services Administration 's facilities standards. [ 3 ] L03 Circadian Lighting Design: if the project chose the circadian lighting option of the precondition, it receives 1 point automatically; a project that achieves at least 275 equivalent melanopic lux receives 2 more points. L04 Electric Light Glare Control: limiting glare from indoor artificial light receives 2 points, by using 100% upward-directed lighting, fixtures with a Unified Glare Rating (UGR) of 16 or less, or fixtures that do not exceed 6,000 candela /sq. m at angles between 45 and 90 degrees from the bottom; this can be demonstrated with lighting calculation software showing a UGR of 16 or less. L05 Daylight Design Strategies: providing daylight exposure indoors through design strategies. A daylight plan with workstations within 7.5 meters of windows receives 1 point, or 2 points if they are within 5.5 meters. Integrating a solar shading system with manual control receives 1 point, or 2 points with an automated control system operating throughout the year. L06 Daylight Simulation: a daylight calculation showing that at least 55% of the occupied project area receives 300 lux for more than 50% of the annual time of use receives 1 point; at least 75% receives 2 points.
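The L06 scoring rule above can be sketched as a small lookup (illustrative only; the function name is invented):

```python
# Sketch of L06 Daylight Simulation scoring: the share of occupied project
# area receiving 300 lux for more than 50% of annual time of use earns
# 1 point at 55% or more, and 2 points at 75% or more.

def daylight_simulation_points(pct_area_meeting_300lux):
    if pct_area_meeting_300lux >= 75:
        return 2
    if pct_area_meeting_300lux >= 55:
        return 1
    return 0

print(daylight_simulation_points(60))  # 1
print(daylight_simulation_points(80))  # 2
```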
L07 Visual Balance receives 1 point, either by designing to at least three of five parameters, including a luminance ratio between the horizontal and vertical planes of at most 10 to 1, a ratio of minimum to average illuminance on the horizontal task plane of at least 0.4, an automation system that changes light characteristics (at least light levels) over a period of at least 10 minutes, and a consistent correlated color temperature (CCT) within plus or minus 200 Kelvin ; or by having the project designed by a lighting professional who takes these ratios into account. L08 Electric Light Quality, covering the quality of light fixtures , provides a total of 3 points. All light fixtures, except decorative and emergency lights , with a color rendering index (CRI) of 90 or more, or a CRI of 80 or more with R9 of 50 or more, or TM-30 color rendering fidelity (Rf) of 78 or more and color rendering gamut (Rg) of 100 or more with Rcs,h1 between -1% and 15%, receive 1 point. Luminaires whose flicker is classified as "reduced flicker operation", or that meet recommended practices 1, 2, or 3 of IEEE standard 1789-2015 for LEDs, or whose short-term flicker indicator (Pst LM) and stroboscopic visibility measure (SVM) are at most 1.0 and 0.6 respectively per NEMA 77-2017, receive 2 points. L09 Occupant Lighting Control : providing individual light controls, one per 60 sq. m or one per 10 occupants, receives 1 point, or one per 30 sq. m or one per 5 occupants receives 2 points, with lighting control in each zone set up with at least three lighting levels, the ability to change groups of lights with different beams, colors, or CCT, manual control for all users via keypad or digital interface, and separate lighting control for presentations. Task lights provided at no cost to employees, with light levels and direction independently controllable by users and a shielded light source, receive 1 point.
The WELL Movement concept promotes physical activity by creating opportunities through spaces, with substantial impact: reducing physical inactivity by 10% or 25% could prevent more than half a million or more than one million deaths globally each year, and global lifespan could increase by 0.68 years. Physical inactivity leads to premature death and chronic illness , including type II diabetes , cardiovascular disease , depression , stroke , dementia , and cancer . 23% of adults globally are inactive, driven by rising urbanization and economic development, and adults sit or are sedentary for an average of 3–9 hours daily, which is linked to these illnesses. V01 Active Buildings and Communities summarizes the optimization points: WELL requires the project to achieve at least one point from four optimization features, specifically V03 Circulation Network (visible, accessible, and aesthetic stair circulation), V04 Facilities for Active Occupants (such as a cycling network with bike parking, or showers, lockers, and changing rooms), V05 Site Planning and Selection (such as a pedestrian-friendly environment or mass transit within walking distance), and V08 Physical Activity Spaces and Equipment (such as free sport opportunities and facilities or green space for outdoor activities). [ 3 ] V02 Ergonomic Workstation Design is intended to let users adjust furniture freely, such as monitor position, work surface height, chair, standing desk, and foot support, with user orientation or instruction. [ 3 ] V03 Circulation Network provides a total of 3 points: designing aesthetic staircases with music, artwork, light levels of at least 215 lux when in use, windows or skylights that provide daylight or nature views, natural design elements, or gamification receives 1 point; integrating point-of-decision signage in stair areas, such as motivational elements, receives 1 point; and providing a visible stair close to the entrance receives 1 point.
V04 Facilities for Active Occupants: providing a cycling network and both short-term and long-term bike parking with basic bike maintenance tools, and a minimum Bike Score® of 50 or an existing cycling network within 200 m walking distance, receives 2 points; adding showers, lockers, and changing facilities, 16 places plus one for every 1,000 occupants, receives another 1 point. V05 Site Planning and Selection: site planning for walking and connection to public transportation, by selecting sites with pedestrian-friendly streets and footpaths and a minimum Walk Score® of 70, receives 2 points; mass transit within a 400 m walking distance of the project boundary receives another 2 points. V06 Physical Activity Opportunities: providing physical activity for occupants at no cost, led by a qualified professional and not used as punishment, with at least one 30-minute session per week (or one 60-minute session per week for school students), receives 1 point; at least 150 minutes in total per week (or at least 60 minutes per day for school students) receives 2 points. V07 Active Furnishings: providing at least manual or electric adjustable work surfaces, treadmills , stationary bicycles , or step machines at 50% or 90% of workstations receives 1 or 2 points respectively. V08 Physical Activity Spaces and Equipment: providing indoor fitness space at no cost receives 2 points, where the space either includes two types of exercises or equipment allowing use by at least 5% of occupants at any time, or has a minimum size of 25 sq. m plus 0.1 sq. m per occupant, up to 930 sq. m; alternatively, WELL allows the project to give free passes to a fitness facility within 200 meters' walking distance.
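The V08 size requirement above is a capped linear formula; a minimal sketch (the function name is invented for illustration):

```python
# Sketch of the V08 minimum fitness-space size: 25 sq. m plus 0.1 sq. m
# per occupant, capped at 930 sq. m.

def min_fitness_space_m2(occupants):
    return min(25 + occupants / 10, 930)  # 0.1 sq. m per occupant

print(min_fitness_space_m2(500))    # 75.0
print(min_fitness_space_m2(20000))  # 930
```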
V09 Physical Activity Promotion receives 1 point if the project offers employees at least two of five supports, such as prizes for sport competitions , subsidies for sports memberships, reduced health care costs, flexible work hours , or at least four days of paid time off per year for physical activity, together with either 50% utilization of the incentive program or an improvement of at least 10%; if students are included, a program is needed to reduce TV viewing, computer or smartphone use, and gaming, and to teach physical activity or movement or provide physical activity breaks. V10 Self-Monitoring Support receives 1 point for providing fitness trackers free or at least 50% subsidized, measuring at least two physical activities and at least one additional metric such as sleep or mindfulness . The WELL Thermal Comfort concept takes a holistic approach, with interventions to help design buildings that address individual thermal discomfort , which is subjective even under identical conditions; one size does not fit all for large groups of people. Personal thermal control should be used to improve productivity and decrease sick building syndrome , and, given the large number of people affected, thermal comfort conditions should create baseline satisfaction for the largest possible number. Thermal comfort greatly influences users and has one of the biggest impacts on motivation, alertness, focus, and mood. A building should provide an acceptable thermal environment to at least 80% of users; in practice only 11% of buildings in the U.S. met accepted satisfaction levels, and 41% of occupants were dissatisfied, which is detrimental to business. Employees perform 15% worse in hot conditions and 14% worse in cold conditions. In the first part, WELL ensures the indoor thermal environment is controlled: for HVAC-controlled (mechanically conditioned) spaces, acceptable thermal comfort by the PMV/PPD model must be between -0.5 and 0.5 over 90% of regularly occupied spaces.
For naturally conditioned spaces, the prevailing mean outdoor temperature (tpma(out)), calculated from average whole-day outdoor temperatures, must be at least 10 degrees Celsius, with a lower indoor temperature limit of 31% of tpma(out) plus 14.3 degrees Celsius; tpma(out) must not exceed 33.5 degrees Celsius, with an upper indoor temperature limit of 31% of tpma(out) plus 21.3 degrees Celsius. For example, at a tpma(out) of 33.5 degrees Celsius, the indoor temperature shall not exceed 31.7 degrees Celsius. If tpma(out) exceeds 33.5 degrees Celsius, a mechanically conditioned space must be used instead. WELL allows the project to use optimization points from T06 Thermal Comfort Monitoring with dry-bulb temperature data, or thermal comfort surveys alone by achieving 2 points from T02 Verified Thermal Comfort, an 80%-satisfaction thermal comfort survey. The second part requires semi-annual testing, in summer and winter, of dry-bulb temperature, relative humidity , air speed , and mean radiant temperature , or alternatively achieving the T06 Thermal Comfort Monitoring feature. T02 Verified Thermal Comfort: a user survey showing 80% satisfaction receives 2 points, and 90% satisfaction receives 3 points, with a significant response rate (35% of more than 45 users, at least 15 of 20 to 45 users, or 80% of up to 20 users) using a sample template. T03 Thermal Zoning: providing thermostat control points, one per 60 sq. m or one per 10 users for 1 point, or one per 30 sq. m or one per 5 users for 2 points, presented through a digital interface on a computer or phone, with temperature sensors kept away from exterior walls, windows, doors, direct sunlight, air supply diffusers, mechanical fans, heaters, and other significant sources of heat or cold.
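The naturally conditioned limits above follow an adaptive comfort model; a minimal sketch of the band computation (the function name is invented for illustration):

```python
# Sketch of the adaptive comfort band for naturally conditioned spaces:
# lower limit 0.31 * tpma(out) + 14.3 C, upper limit 0.31 * tpma(out) + 21.3 C,
# valid while tpma(out) is between 10 and 33.5 degrees Celsius.

def adaptive_comfort_band(tpma_out_c):
    if not 10.0 <= tpma_out_c <= 33.5:
        raise ValueError("outside the model's range; use mechanical conditioning")
    lower = 0.31 * tpma_out_c + 14.3
    upper = 0.31 * tpma_out_c + 21.3
    return (lower, upper)

low, high = adaptive_comfort_band(33.5)
print(round(high, 1))  # 31.7, matching the example in the text
```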
T04 Individual Thermal Control: providing personal thermal control for cooling and for heating receives 1 point each. For cooling: user-adjustable thermostats , desk or ceiling fans, chairs with mechanical cooling, or other solutions producing a PMV change of -0.5 within 15 minutes without changing the PMV for other occupants. For heating: user-adjustable thermostats, electric parabolic space heaters, electrically heated chairs or foot warmers, personal or shared blankets, or other solutions producing a PMV change of +0.5 within 15 minutes without changing the PMV for other occupants. Allowing a flexible dress code receives another 1 point. T05 Radiant Thermal Comfort : radiant thermal management for heating and for cooling receives 1 point each, by implementing radiant ceilings, walls, floors, or radiant panels over at least 50% of the regularly occupied project area. T06 Thermal Comfort Monitoring: monitoring parameters such as dry-bulb temperature and relative humidity receives 1 point, with a display screen and a website or mobile application for every 500 sq. m of regularly occupied space, and data submitted annually to WELL with proof of calibration. T07 Humidity Control: controlling humidity receives 1 point, by having a mechanical system that can maintain relative humidity between 30% and 60% at all times, submitting documents modeling relative humidity in the space between 30% and 60% for at least 98% of all business hours, or meeting thermal comfort monitoring (T06) with relative humidity between 30% and 60%, except in high-humidity spaces. The WELL Sound concept provides a holistic approach to the acoustical comfort of a space, measuring user satisfaction with the human response to mechanical vibrations through a medium such as air. Sleep disturbance , hypertension , and reduced mental arithmetic skills in children are caused by exterior noise and by HVAC and appliance noise , and traffic noise at night can raise the risk of myocardial infarction .
Poor reverberation times and background sound levels can affect speech intelligibility in educational areas, where aural comprehension is vital for memory retention . Planning and commissioning an isolated and balanced HVAC system provides a firm baseline for the anticipated background noise level. Adding facade elements such as mass and glazing, sealing gaps, and providing airspace between enclosed spaces can increase occupant comfort, and replacing hard surfaces with absorptive materials improves speech projection and acoustical privacy. A sound masking system provides consistent background sound levels, reducing the signal-to-noise ratio of speech to increase privacy. For labeling acoustic zones, the interior design must include acoustic zoning labels, such as loud, quiet, mixed, and circulation zones, to mitigate sound transmission from loud zones to quiet zones. For the acoustic design plan, the design concept shall incorporate acoustical comfort, background noise , speech privacy, and reverberation time and/or impact noise within the project boundary, or a professional acoustical engineer can evaluate existing sounds and recommend solutions and measurements. [ 3 ] S02 Maximum Noise Levels: limiting background noise levels, averaged over a period of five minutes, receives 1 point at tier 1, where the average sound pressure level (SPL) for categories 1 to 4 ranges from 40 to 55 dBA and 60 to 75 dBC, and the maximum SPL from 50 to 65 dBA or 70 to 85 dBC; tier 2 receives 3 points instead, with average SPL for categories 1 to 4 from 35 to 50 dBA and 55 to 70 dBC, and maximum SPL from 45 to 60 dBA or 65 to 80 dBC. S03 Sound Barriers: sound barriers designed for sound isolation, with walls meeting Sound Transmission Class (STC) or Weighted Sound Reduction Index (Rw) values from a minimum of 40 to 60 STC depending on the wall type, and doors a minimum of 30 STC, receive 1 point.
Achieving a minimum Noise Isolation Class (NIC) or Weighted Level Difference (Dw) for each wall type, from a minimum of 35 to 55 NIC, or a combined sum of NIC or Dw and Noise Criteria rating (NC) or A-weighted sound pressure level (LAeq) of at least 70 to 85 (NIC + NC or Dw + LAeq), receives 2 points. S04 Reverberation Time: limiting the persistence of sound in a room receives 2 points, for example a maximum of 1.1 seconds for music rehearsal areas and 0.6 seconds for learning areas, verified by technical documents or a performance test. S05 Sound Reducing Surfaces: furnishing rooms with sound-absorbing surfaces meeting tier 1 or tier 2 criteria receives 1 or 2 points; for example, in open workspaces, a minimum noise reduction coefficient (NRC) or alpha-w of 0.75 or 0.90, and furniture at a minimum height of 1.2 m above the finished floor with an NRC or alpha-w of at least 0.70, covering cumulatively at least 10% of the occupied project area. S06 Minimum Background Sound: providing artificial background sound through a sound masking system receives 1 point, to increase privacy, producing a 1/3-octave-band output signal with a minimum frequency spectrum of 100 Hz to 5,000 Hz. Providing enhanced speech reduction automatically receives 1 point by achieving 2 points from S03 or S05, plus S06 part 1. The WELL Materials concept promotes the use of low-hazard cleaning products and practices that reduce health impacts, mitigation of contamination, protection of public health, waste management, and low-hazard pesticides. Some toxic, bioaccumulation -prone materials carry legacy chemicals such as lead, which accounted for about one million deaths in 2017. CCA-treated wood structures can leach arsenic into soil where children can be exposed. Newer materials such as perfluorinated alkyl compounds (PFCs), orthophthalates , some heavy metals, and halogenated flame retardants (HFRs) offer superior performance but cause negative health impacts.
Volatile organic compounds (VOCs) are so common in insulation , paints, coatings, adhesives, furniture and furnishings, composite wood products, and flooring that they cause respiratory issues and cancer risks. Two solutions are to increase knowledge of materials and to promote assessment and selection that minimize impacts. For X01 Material Restrictions, the asbestos level in newly installed products is limited to under 1,000 ppm by weight or area. Mercury content in fluorescent and sodium-vapor lamps is limited to 2.5 mg to 32 mg, or the lamps must pass the Restriction of Hazardous Substances Directive (RoHS). Fire alarms, meters, sensors, relays, thermostats, and load break switches are limited to no more than 1,000 ppm of mercury by weight and 100 ppm of lead, or must be RoHS-certified. Newly installed paints must not contain more than 100 ppm of lead by weight, certified against the Living Building Challenge 's Red List, the Cradle-to-cradle design list, or the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) and Candidate List of Substances of Very High Concern (SVHC) lists. Drinking water pipes are limited to 0.25% lead, or must be labeled per American National Standards Institute (ANSI) or NSF standards. [ 3 ] For X02 Interior Hazardous Materials Management, if the building was constructed before the enactment of an asbestos-banning law , a qualified inspector must survey for asbestos-containing materials (ACM) using polarized light microscopy (PLM) or transmission electron microscopy (TEM); any ACM found must be removed from the project. As with asbestos management, removal is required if lead-containing materials are found in paints. Polychlorinated biphenyls (PCBs) are restricted, especially in caulk , and must be removed. [ 3 ] For X03 CCA and Lead Management, outdoor materials treated with chromated copper arsenate (CCA) are banned.
Lead hazards over the limit in bare soil, turf, artificial turf, recycled tire material, and paint must be examined and removed or replaced. [ 3 ] X04 Site Remediation : assessing and mitigating site hazards through a risk-based, sustainable remediation process receives 1 point. X05 Enhanced Material Restrictions: at least 50% of furniture, millwork , and fixtures limited to 100 parts per million (ppm) of halogenated flame retardants (HFRs), polyfluoroalkyl substances (PFAS), lead , cadmium , and mercury , or containing no textiles and plastics at all, with all electrical products meeting RoHS restrictions, is awarded 1 point. Flooring products limited to 100 ppm of HFRs, PFAS, and orthophthalates , insulation products limited to 100 ppm of HFRs, ceiling and wall panels limited to 100 ppm of HFRs and orthophthalates, and pipes and fittings limited to 100 ppm of orthophthalates receive 1 point. X06 VOC Restrictions: restricting volatile organic compounds from wet-applied products, furniture, and architectural and interior products receives 4 points. X07 Materials Transparency: selecting products with disclosed ingredients, enhanced ingredient disclosure, and third-party-verified ingredients receives 1 point each, for a total of 3 points. X08 Materials Optimization: at least 25 distinct products with ingredients inventoried to 100 ppm and free of compounds listed in the Living Building Challenge 's Red List, the Cradle-to-cradle design list, or the REACH Restriction and Substance of Very High Concern (SVHC) lists, or products purchased for future repair, receive 1 point. For optimized products, at least 15 distinct products must be certified under Cradle to Cradle Certified, the Living Product Challenge, or Global GreenTag . X09 Waste Management : implementing a waste management plan, with identification of roles, sources, and protocols to clean and track waste, receives 1 point.
X10 Pest Management and Pesticide Use: implementing integrated pest management (IPM) for indoor and outdoor spaces receives 1 point. X11 Cleaning Products and Protocols: developing a cleaning plan receives 2 points. The WELL Mind concept addresses mental health, estimated to contribute to 14.3% of deaths worldwide, 8 million deaths per year; including substance use, mental health issues account for 13% of the global burden of disease and 32% of years lived with disability. Alcohol and drug use contribute significantly to premature death, with alcohol alone causing 3.3 million deaths and 5% of the global burden of disease. Depression and anxiety rank first and sixth among causes of the global burden of disease, with depression accounting for 4% of the global burden of disease and causing the largest share of disability worldwide. The global economy loses 1 trillion dollars to productivity lost to depression and anxiety; 18% of adults experience these conditions, and over 30% of adults will experience them during their lifetime, yet spending to fight the issue is less than 2 dollars per person. In high-income and low-income countries, 35–50% and 76–85% of affected people respectively go without treatment, and suicide causes more than 800,000 deaths per year worldwide. People with mental health issues have a mortality rate 2.2 times higher than average and a median loss of 10 years of life. The issue can be mitigated by policies, programs, and design: workplace promotion, prevention, and interventions. Reducing stigma, promoting positive work environments, stress management programs and strategies, substance use services and treatment, optimal sleep, and increased contact with nature can improve overall mental health. For M01 Mental Health Promotion, WELL ensures that users receive at least two of five supports, such as quarterly education on mental health, annual trainings, weekly mindfulness programs, healthy working hours , and a space for relaxation . The project also sends users communications, such as annual communications and onboarding, addressing mental health and well-being benefits and resources.
For M02 Nature and Place, common spaces, rooms, and circulation routes must integrate natural elements such as natural shapes and materials, plants, water, and nature views. The project must be designed to provide a celebration of culture and social cognition , celebration of place, integration of art , and human delight that connects to place, per Living Building Challenge 4.0, Core Imperative 19 (Beauty + Biophilia ). [ 3 ] M03 Mental Health Services: offering mental health screening for depression and substance use , with a licensed mental health professional and guidance on next steps, receives 1 point. Mental health services such as clinical screening, inpatient treatment, outpatient treatment, prescription medication at no cost or subsidized, information on benefits coverage, and consultation receive 1 point. Supporting sick leave , short-term and long-term leave, interpersonal support, adjusted work schedules, and relocation within the workplace to quieter areas receives 1 point. Mental health recovery support, through trauma-focused psychotherapy , psychological first aid (PFA), bereavement counseling, and information on benefits coverage for additional services, receives 1 point. M04 Mental Health Education: mental health education for regular occupants on managing personal mental health at work, provided in person or virtually, receives 1 point, and education for managers on reducing workplace stress and burnout and improving motivation receives an additional 1 point. M05 Stress Management: developing a stress management plan, assessing overwork (more than 48 hours per week), absenteeism, unused paid time off, performance, turnover rates, and survey results, with organizational changes to improve employee stress and with employee participation, receives 2 points.
For M06 Restorative Opportunities, supporting healthy working hours receives 1 point: a minimum of 11 consecutive resting hours per day, 24 consecutive hours off per week (48 hours for shift workers), and, for eligible employees, a minimum of 20 days paid time off per year, with no work expected during time off, sick and vacation leave clearly distinguished, and a defined accrual policy; schools must not start earlier than 8:30 am. An additional 1 point is awarded for allowing naps and providing a well acoustically separated nap area accommodating at least 1% of eligible employees for naps of at least 30 minutes. For M07 Restorative Spaces, providing a break space where users can restore themselves and find relief from fatigue receives 1 point; the space must measure at least 7 sqm plus 0.1 sqm per regular occupant, up to 186 sqm, and offer calming adjustable lighting, sound interventions with natural features, thermal control, movable lightweight chairs and cushions, natural elements, subdued colors, and visual privacy, including signage explaining its purpose. For M08 Restorative Programming, offering restorative programming such as a mindfulness training course, yoga, or digital mindfulness offerings receives 1 point. For M09 Enhanced Access to Nature, a floor plan in which at least 75% of workstations have a sightline to natural elements within 10 meters receives 1 point, and providing outdoor nature access, with at least one green or blue space within a 200-meter walk of the project boundary and total green space of at least 0.5 hectare, receives an additional 1 point. For M10 Tobacco Cessation, providing resources, motivation, rewards, counseling, and prescriptions to support quitting and limiting tobacco use receives 3 points. For M11 Substance Use Services, education on drug use receives 1 point, with an additional 1 point for clinical services. An estimated 235 million urban families live in substandard housing, leading to poor health outcomes such as asthma, infectious disease, and cardiovascular disease. Only 55% of U.S. companies see diversity as a priority, and in the UK women earn 80.2% of what men earn.
Spaces usually are not designed for diverse needs. Surveying occupants can bring greater returns on investment. Fostering civic engagement and support can increase employee retention and attraction as well as financial returns. Design plays a critical role in making spaces accessible to all users. C01 Health and Well-Being Promotion requires providing a WELL feature guide and regularly communicating with users. C02 Integrative Design requires incorporating all stakeholders in setting the project's health and well-being goals. C03 Emergency Preparedness requires implementing emergency management planning and post-occupancy evaluation. C04 Occupant Survey requires implementing a survey program for users. [ 3 ] C05 Enhanced Occupant Survey offers a total of 4 points: using additional surveys and analysis for 1 point, comparing pre- and post-occupancy surveys for 1 point, implementing an aspirational satisfaction and unmet-satisfaction plan for 1 point, and conducting focus group activities with interviews and evaluation of results for 1 point. For C06 Health Services and Benefits, providing employees with a health benefits policy at no cost or subsidized, offering on-demand health services, offering sick leave, and supporting a vaccination program each receive 1 point, for a total of 4 points. For C07 Enhanced Health and Well-Being Promotion, promoting a culture of health through communications and a promotion group receives 1 point, and having at least one dedicated executive-level employee to plan and promote healthy activities receives 1 point. For C08 New Parent Support, offering new-parent leave receives 1, 2, or 3 points for paid leave of at least 12, 18, or 30 weeks respectively; breastfeeding support, such as break time and access to an insulated cooler or refrigerator, receives 1 point. For C09 New Mother Support, providing a lactation room of at least 2.1 x 2.1 m with appropriate elements in the building receives 2 points. For C10 Family Support, childcare support services, family leave, and bereavement support each receive 1 point, for a total of 3 points.
For C11 Civic Engagement, promoting community engagement and providing community space for employees each receive 1 point, for a total of 2 points. For C12 Diversity and Inclusion, creating a diversity, equity, and inclusion (DEI) assessment and action plan, a DEI system, and DEI hiring practices each receive 1 point, for a total of 3 points. For C13 Accessibility and Universal Design, implementing universal design receives 2 points. For C14 Emergency Resources, providing emergency information and procedures, a building notification system, automated external defibrillators (AEDs), first aid kits, and emergency training for medical emergencies and security teams, or training in cardiopulmonary resuscitation (CPR), first aid, and AED usage, receives 1 point; providing an opioid emergency kit containing an overdose-reversal medication such as naloxone receives 1 point. The Innovation category has no requirements but can provide additional points: new interventions for up to 10 topics receive 1 point each. If one member of the project team is a WELL AP, the project automatically receives 1 point. Offering WELL educational tours at least six times per year receives 1 point. If the project commits to any IWBI-approved well-being or health program completed within three years, it receives 1 point. If the project achieves any IWBI-approved green building certification, it receives 5 points toward its rating. Scientific research provides information about the standard's performance. Existing research focuses on the evaluation of indoor environmental quality (IEQ) parameters. The certification requires post-occupancy evaluation, which allows occupants to provide feedback to building owners and management on these IEQ parameters. For buildings with 10 or more occupants, the Occupant Indoor Environmental Quality (IEQ) Survey from the Center for the Built Environment at UC Berkeley (or an approved alternative) is completed by a representative sample of at least 30% of occupants at least once per year.
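The sampling rule is straightforward arithmetic; the sketch below (the function name is illustrative, not part of the standard or the CBE survey tooling) computes the minimum number of respondents implied by the 10-occupant threshold and the 30% representative sample:

```python
import math

# Illustrative helper, not an official WELL calculation tool: the IEQ
# survey applies to buildings with 10 or more occupants and must reach
# a representative sample of at least 30% of occupants.
def min_survey_respondents(occupants: int, fraction: float = 0.30) -> int:
    if occupants < 10:
        return 0  # survey requirement does not apply
    return math.ceil(occupants * fraction)

print(min_survey_respondents(9))    # 0 (below the 10-occupant threshold)
print(min_survey_respondents(250))  # 75
```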
The survey covers the following topics: acoustics , thermal comfort , furnishings, workspace light levels and quality, odors and air quality, cleanliness and maintenance, and layout. [ 4 ] In 2020, researchers analyzed 1,121 post-occupancy evaluation surveys conducted in nine offices, two WELL-certified and seven not WELL-certified. [ 5 ] Results of the study were mixed, with higher occupant satisfaction in the WELL-certified buildings for spatial comfort, thermal comfort, noise and privacy, personal control, and workspace comfort, but lower satisfaction for visual comfort and connection to the outside in comparison with non-WELL certified buildings. [ 5 ] These findings may be attributable to the types of non-WELL certified buildings used in the comparison, as they may already be high-performance buildings in other regards which do not necessarily satisfy all of the WELL certification’s criteria. [ 5 ] In 2021, another study on surveys compared the results of three rounds of occupant IEQ satisfaction surveys reported by three groups of employees who moved from three non-WELL (two BREEAM and one non-certified) to three WELL-certified office buildings. [ 6 ] For two out of the three building pairs, there was a statistically significant increase in building and workspace satisfaction after relocation to WELL buildings. [ 6 ] However, for 55% of certification parameters for the three compared cases, there was an insignificant difference upon relocation. [ 6 ] Results found higher occupant satisfaction for building cleanliness and furniture but no increase in satisfaction with noise and visual comfort. [ 6 ] Another 2021 study investigated indoor air quality (IAQ) before and after relocation to WELL-certified office buildings. [ 7 ] The results indicated there was no significant concentration difference for the majority of measured air pollutants between non-WELL and WELL buildings. 
[ 7 ] In 2022, researchers conducted a pre- versus post-occupancy evaluation of approximately 1,300 workers transitioning to WELL-certified offices from non-WELL certified offices. [ 8 ] Using pre- and post-occupancy surveys, overall satisfaction rates improved from 42% (pre-occupancy) to 70% (post-occupancy) across all parameters. [ 8 ] The largest increases in satisfaction were for cleanliness and access to nature, while occupants were most satisfied with maintenance and lighting in WELL-certified offices. [ 8 ] In 2023, researchers analyzed 1,403 post-occupancy evaluation surveys from 14 open-plan offices (10 of which were WELL-certified and four of which were uncertified) in Australia, New Zealand, and Hong Kong. [ 9 ] The five offices that achieved the highest satisfaction in interior design, indoor air quality, privacy and connection to the outdoor environment were WELL-certified. [ 9 ] No significant differences in health were found between WELL-certified and non-WELL certified buildings as quantified by questions about physical and mental health presented in the post-occupancy evaluation surveys. [ 9 ] In 2024, researchers used a statistical matching approach to compare occupant satisfaction from 3,268 surveys from 20 WELL-certified and 49 LEED -certified buildings. [ 10 ] Overall building and workplace satisfaction was found to be high in WELL-certified buildings (94% and 87%). [ 10 ] Statistical analysis revealed that there is a 39% higher probability of finding an occupant that is satisfied with the building overall in a WELL-certified building than a LEED-certified building. [ 10 ] Although satisfaction was found to be higher in WELL-certified buildings, satisfaction in LEED-certified buildings with the building, workspace, and most IEQ parameters was still relatively high. 
[ 10 ] Temperature and sound privacy in LEED-certified buildings are the only parameters with mean satisfaction values less than “neutral” amongst all studied parameters in LEED- and WELL-certified buildings. [ 10 ]
https://en.wikipedia.org/wiki/WELL_Building_Standard
Wemo is a series of home automation products from Belkin that enable users to control home electronics remotely. The product suite includes electrical plugs , motion sensors , light switches , cameras, light bulbs , and a mobile app . The Wemo Switch can be plugged into any home outlet and can then be controlled from an iOS or Android smartphone running the Wemo App, via home Wi-Fi or a mobile phone network. The Wemo Motion Sensor can be placed anywhere, as long as it can access the same Wi-Fi network as the Wemo devices it is intended to control. It can then turn on and off any of the Wemo devices connected to the Wi‑Fi network as people pass by. The Wemo Insight Switch provides information on power usage and cost estimation for devices plugged into the switch. The Wemo Light Switch is for use where a light is controlled by a single light switch. Multi-way switching is not supported at this time but can be approximated by installing a Wemo Light Switch at each location. The Wemo App controls the Wemo devices from anywhere in the world as long as the Wemo devices' wireless network is connected to the Internet. Wemo devices can also be controlled using IFTTT technology, or by voice through the Amazon Echo , [ 1 ] Google Assistant , [ 2 ] and Apple 's Siri (through the use of the Wemo Bridge). [ 3 ] Wemo switches are controlled via IP networks; thus, for a switch to be controllable from a remote location, it must be open to receive connections from the Internet. In January 2013, it was revealed that the Wemo had a security flaw in its UPnP implementation that allowed an unauthorized user to take control of a switch. This could allow malicious attacks, such as flipping the switch at a very fast rate, which could damage certain devices and even cause electrical fires. [ 4 ] This vulnerability has been addressed by updated firmware releases. [ 5 ]
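Because Wemo devices are UPnP endpoints on the local IP network, they can be located with ordinary SSDP discovery. The sketch below is a generic UPnP example, not a Wemo-specific API: it sends the standard M-SEARCH probe to the SSDP multicast address, and the responses (which vary by device and firmware) identify each device's control URL:

```python
import socket

# Generic SSDP (UPnP discovery) M-SEARCH request. UPnP devices such as
# Wemo switches listen on multicast 239.255.255.250:1900. The search
# target below is the generic root-device ST, not a Wemo-specific one.
def build_msearch(st: str = "upnp:rootdevice", mx: int = 2) -> bytes:
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",
        f"ST: {st}",
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout: float = 2.0):
    """Broadcast an M-SEARCH and collect raw responses (best effort)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), ("239.255.255.250", 1900))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data))
    except socket.timeout:
        pass
    return responses

print(build_msearch().decode())
```

Actually controlling a switch then requires SOAP calls against the device's advertised service endpoints, which is the surface where the 2013 UPnP flaw described above was exploitable.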
https://en.wikipedia.org/wiki/WEMO
The Water Erosion Prediction Project ( WEPP ) model is a physically based erosion simulation model built on the fundamentals of hydrology , plant science , hydraulics , and erosion mechanics. [ 1 ] [ 2 ] The model was developed by an interagency team of scientists to replace the Universal Soil Loss Equation (USLE) and has been widely used in the United States and the world. [ 3 ] WEPP requires four inputs, i.e., climate , topography , soil , and management ( vegetation ); and provides various types of outputs, including water balance ( surface runoff , subsurface flow , and evapotranspiration ), soil detachment and deposition at points along the slope, sediment delivery , and vegetation growth. The WEPP model has been improved continuously since its public delivery in 1995, and is applicable for a variety of areas (e.g., cropland , rangeland , forestry , fisheries , and surface coal mining ). WEPP is applicable for a wide range of geographic and land-use and management conditions, and capable of predicting spatial and temporal distributions of soil detachment and deposition on an event or continuous basis at both small (hillslopes, roads, small parcels) and large ( watershed ) scales. [ 4 ] [ 5 ] [ 6 ] Hillslope applications of the model can simulate a single profile having various distributions of soil , vegetation , and plant/management conditions. In WEPP watershed applications, multiple hillslopes, channels , and impoundments can be linked together, and runoff and sediment yield from the entire catchment predicted. The model has been parameterized for a large number of soils across the U.S. and model performance has been assessed under a wide variety of land-use and management conditions. In addition, WEPP can generate long-term daily climatic data with CLIGEN, an auxiliary stochastic climate generator. [ 7 ] The CLIGEN database contains weather statistics from more than 2,600 weather stations in the United States. 
The WEPP climate database is supplemented by the PRISM database , [ 8 ] which further refines the climatic data based on longitude , latitude , and elevation . WEPP can provide daily runoff , subsurface flow , and sediment output categorized into five particle-size classes: primary clay , primary silt , small aggregates , large aggregates , and primary sand , allowing calculation of selective sediment transport , and enrichment of the fine sediment sizes. Over the last decade, researchers have made significant improvements to the WEPP model. These include improved algorithms to simulate the effect of hydraulic structures and impoundments on runoff and sediment delivery , [ 9 ] the addition of Penman–Monteith ET algorithms, [ 10 ] subsurface converging lateral flow to represent variable source area runoff , [ 11 ] improved canopy biomass routines for forested applications, [ 12 ] and the incorporation of an alternative, energy-balance -based winter hydrologic routine . [ 13 ] A number of modern graphical user interface programs have also been created, to assist in easier application of WEPP. The main interface for the model is a standalone Windows application (downloadable via: http://www.ars.usda.gov/Research/docs.htm?docid=10621 ), that allows a user to simulate hillslope profiles and small watersheds and have full control over all model inputs (Figure 1). Additionally, web-based interfaces allow rapid use of the model while accessing existing soil , climate , and management databases (Figure 2). A number of geospatial interfaces to WEPP (example in Figure 3) are also available: The U.S. Forest Service has developed a suite of internet interfaces , the Forest Service WEPP (FS WEPP) interfaces , for easier applications by stakeholders in forest and rangeland management (forest engineers, rangeland scientists, federal and state regulatory personnel) and the general public. 
[ 20 ] The interfaces can be readily accessed and run through the internet ( http://forest.moscowfsl.wsu.edu/fswepp/ ), and do not require any in-depth understanding of the hydrology , hydraulic and erosion principles embedded in the WEPP model. The FS WEPP interfaces include:
https://en.wikipedia.org/wiki/WEPP
Tungsten(VI) fluoride , also known as tungsten hexafluoride , is an inorganic compound with the formula W F 6 . It is a toxic, corrosive, colorless gas, with a density of about 13 kg/m 3 (22 lb/cu yd) (roughly 11 times heavier than air). [ 2 ] [ 3 ] It is the densest known gas under standard ambient temperature and pressure (298 K, 1 atm) and the only well characterized gas under these conditions that contains a transition metal. [ 4 ] [ 5 ] WF 6 is commonly used by the semiconductor industry to form tungsten films, through the process of chemical vapor deposition . This layer is used in a low- resistivity metallic " interconnect ". [ 6 ] It is one of seventeen known binary hexafluorides . The WF 6 molecule is octahedral with the symmetry point group of O h . The W–F bond distances are 183.2 pm . [ 7 ] Between 2.3 and 17 °C , tungsten hexafluoride condenses into a colorless liquid having the density of 3.44 g/cm 3 at 15 °C . [ 8 ] At 2.3 °C it freezes into a white solid having a cubic crystalline structure, the lattice constant of 628 pm and calculated density 3.99 g/cm 3 . At −9 °C this structure transforms into an orthorhombic solid with the lattice constants of a = 960.3 pm, b = 871.3 pm, and c = 504.4 pm, and the density of 4.56 g/cm 3 . In this phase, the W–F distance is 181 pm, and the mean closest molecular contacts are 312 pm . Whereas WF 6 gas is one of the densest gases, with the density exceeding that of the heaviest elemental gas radon (9.73 g/L), the density of WF 6 in the liquid and solid state is rather moderate. [ 9 ] The vapor pressure of WF 6 between −70 and 17 °C can be described by the equation where the P = vapor pressure ( bar ), T = temperature (°C). [ 10 ] [ 11 ] Tungsten hexafluoride was first obtained by conversion of tungsten hexachloride with hydrogen fluoride by Otto Ruff and Fritz Eisner in 1905. 
[ 12 ] [ 13 ] The compound is now commonly produced by the exothermic reaction of fluorine gas with tungsten powder at a temperature between 350 and 400 °C : [ 8 ] The gaseous product is separated from WOF 4 , a common impurity, by distillation. In a variation on the direct fluorination, the metal is placed in a heated reactor, slightly pressurized to 1.2 to 2.0 psi (8.3 to 13.8 kPa), with a constant flow of WF 6 infused with a small amount of fluorine gas. [ 14 ] The fluorine gas in the above method can be substituted by ClF , ClF 3 or BrF 3 . An alternative procedure for producing tungsten fluoride is to treat tungsten trioxide ( WO 3 ) with HF , BrF 3 or SF 4 . And besides HF, other fluorinating agents can also be used to convert tungsten hexachloride in a way similar to Ruff and Eisner original method: [ 4 ] On contact with water , tungsten hexafluoride gives hydrogen fluoride (HF) and tungsten oxyfluorides, eventually forming tungsten trioxide : [ 4 ] Unlike some other metal fluorides, WF 6 is not a useful fluorinating agent nor is it a powerful oxidant. It can be reduced to the yellow WF 4 . [ 15 ] WF 6 forms a variety of 1:1 and 1:2 adducts with Lewis bases , examples being WF 6 ( S(CH 3 ) 2 ), WF 6 (S(CH 3 ) 2 ) 2 , WF 6 ( P(CH 3 ) 3 ), and WF 6 ( py ) 2 . [ 16 ] The dominant application of tungsten fluoride is in semiconductor industry, where it is widely used for depositing tungsten metal in a chemical vapor deposition (CVD) process. The expansion of the industry in the 1980s and 1990s resulted in the increase of WF 6 consumption, which remains at around 200 tonnes per year worldwide. Tungsten metal is attractive because of its relatively high thermal and chemical stability, as well as low resistivity (5.6 μΩ·cm) and very low electromigration . WF 6 is favored over related compounds, such as WCl 6 or WBr 6 , because of its higher vapor pressure resulting in higher deposition rates. 
Since 1967, two WF 6 deposition routes have been developed and employed, thermal decomposition and hydrogen reduction. [ 17 ] The required WF 6 gas purity is rather high and varies between 99.98% and 99.9995% depending on the application. [ 4 ] WF 6 molecules have to be split up in the CVD process. The decomposition is usually facilitated by mixing WF 6 with hydrogen, silane , germane , diborane , phosphine , and related hydrogen-containing gases. WF 6 reacts upon contact with a silicon substrate. [ 4 ] The WF 6 decomposition on silicon is temperature-dependent: This dependence is crucial, as twice as much silicon is being consumed at higher temperatures. The deposition occurs selectively on pure silicon only, but not on silicon dioxide or silicon nitride , thus the reaction is highly sensitive to contamination or substrate pre-treatment. The decomposition reaction is fast, but saturates when the tungsten layer thickness reaches 10–15 micrometers . The saturation occurs because the tungsten layer stops diffusion of WF 6 molecules to the Si substrate which is the only catalyst of molecular decomposition in this process. [ 4 ] If the deposition occurs not in an inert atmosphere but in an oxygen-containing atmosphere (air), then instead of tungsten, a tungsten oxide layer is produced. [ 18 ] The deposition process occurs at temperatures between 300 and 800 °C and results in formation of hydrogen fluoride vapors: The crystallinity of the produced tungsten layers can be controlled by altering the WF 6 / H 2 ratio and the substrate temperature: low ratios and temperatures result in (100) oriented tungsten crystallites whereas higher values favor the (111) orientation. Formation of HF is a drawback, as the HF vapor is very aggressive and etches away most materials. Also, the deposited tungsten shows poor adhesion to the silicon dioxide which is the main passivation material in semiconductor electronics. 
Therefore, SiO 2 has to be covered with an extra buffer layer prior to the tungsten deposition. On the other hand, etching by HF may be beneficial to remove unwanted impurity layers. [ 4 ] The characteristic features of tungsten deposition from the WF 6 / SiH 4 are high speed, good adhesion and layer smoothness. The drawbacks are explosion hazard and high sensitivity of the deposition rate and morphology to the process parameters, such as mixing ratio, substrate temperature, etc. Therefore, silane is commonly used to create a thin tungsten nucleation layer. It is then switched to hydrogen, that slows down the deposition and cleans up the layer. [ 4 ] Deposition from WF 6 / GeH 4 mixture is similar to that of WF 6 / SiH 4 , but the tungsten layer becomes contaminated with relatively (compared to Si) heavy germanium up to concentrations of 10–15%. This increases tungsten resistance from about 5 to 200 μΩ·cm. [ 4 ] WF 6 can be used for the production of tungsten carbide . As a heavy gas, WF 6 can be used as a buffer to control gas reactions. For example, it slows down the chemistry of the Ar / O 2 / H 2 flame and reduces the flame temperature. [ 19 ] Tungsten hexafluoride is an extremely corrosive compound that attacks any tissue. Because of the formation of hydrofluoric acid upon reaction of WF 6 with humidity, WF 6 storage vessels have Teflon gaskets. [ 20 ]
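The headline density figures for WF 6 gas can be roughly reproduced from the ideal-gas law. The sketch below is a back-of-the-envelope estimate, not a measurement; real-gas behavior pushes the true value up toward the quoted ~13 kg/m3:

```python
# Ideal-gas estimate of WF6 gas density at 298 K and 1 atm, compared
# with dry air. Molar masses: W 183.84 + 6 * F 19.00 ~ 297.8 g/mol.
R = 8.314          # J/(mol*K)
M_WF6 = 0.29783    # kg/mol
M_AIR = 0.02897    # kg/mol, mean molar mass of dry air
P, T = 101325.0, 298.0

rho_wf6 = P * M_WF6 / (R * T)   # ~12.2 kg/m3 (ideal gas)
rho_air = P * M_AIR / (R * T)   # ~1.18 kg/m3
print(round(rho_wf6, 1), round(rho_wf6 / rho_air, 1))
```

The ideal-gas ratio comes out near 10; the "roughly 11 times heavier than air" figure in the article reflects the real (non-ideal) gas density.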
https://en.wikipedia.org/wiki/WF6
WAP four-disulfide core domain protein 2 - also known as Human Epididymis Protein 4 [ 5 ] (HE4) - is a protein that in humans is encoded by the WFDC2 gene . [ 6 ] [ 7 ] [ 8 ] HE4 is a tumor marker of ovarian cancer , with 80% sensitivity at a cut-off of 150 pmol/L. [ 9 ] This gene encodes a protein that is a member of the WFDC domain family. The WFDC domain, or WAP Signature motif, contains eight cysteines forming four disulfide bonds at the core of the protein, and functions as a protease inhibitor in many family members. This gene is expressed in pulmonary epithelial cells among other tissues, and was also found to be expressed in some ovarian cancers. [ 5 ] The encoded protein is a small secretory protein, which may be involved in sperm maturation. [ 8 ]
https://en.wikipedia.org/wiki/WFDC2
WFF 'N PROOF is a game of modern logic , developed to teach principles of symbolic logic . It was developed in 1962 by Layman E. Allen, [ 1 ] [ 2 ] a former professor at Yale Law School and the University of Michigan . As marketed in the 1960s, WFF 'N PROOF was a series of 20 games of increasing complexity, varying with the logical rules and methods available. All players must be able to recognize a " well-formed formula " (WFF in Łukasiewicz notation ), to assemble dice values into valid statements (WFFs), and to apply the rules of logical inference so as to complete a proof . [ 1 ] Games are played by two or more people. The first player to roll the cubes sets a WFF as a Goal. Each player then tries to construct (with whatever is available) a complete logical proof of the goal. The Solution to the goal consists of the Premises with which they started their proof and the Rules they used to get to the Goal. Players take turns moving cubes to the Essentials, Permitted Premises, or Permitted Rules sections of the mat. Any cube moved to Essentials must be used in any Solution, and must be an essential part of that solution; any cube value in Permitted Premises may be used as part of a premise; any cube value in Permitted Rules may be used as part of a Rule. Thus the players themselves shape the Solution, forcing one another to create new Solutions in response to moves. At any point a player may challenge the last mover if they feel the last mover has made a mistake. There are three types of Challenges. A-Flub means that the Challenger can make a Solution using the cubes in Required and Permitted and one more cube from Resources. P-Flub, or Challenge Impossible, means the player believes the Mover cannot make a Solution using the cubes in Required, Permitted, and Resources. C-A-Flub means that the Challenger believes that the Mover, or some previous mover, missed an A-Flub. After a challenge, at least one player must show a correct Solution on paper.
The scoring goes like this: The player who wins the challenge scores 10 points. The loser of the challenge scores 6. If there is a third player, he must side with or against the Challenger and scores points depending upon that decision. The name is a play on Whiffenpoofs , [ citation needed ] an a cappella singing group established at Yale University in 1909. [ 3 ]
https://en.wikipedia.org/wiki/WFF_'N_PROOF
The WHO Model List of Essential Medicines for Children (aka Essential Medicines List for Children [ 1 ] or EMLc [ 1 ] ), published by the World Health Organization (WHO), contains the medications considered to be most effective and safe in children up to twelve years of age to meet the most important needs in a health system . [ 2 ] [ 3 ] The list is divided into core items and complementary items. [ 4 ] The core items are deemed to be the most cost-effective options for key health problems and are usable with little additional health care resources. [ 4 ] The complementary items either require additional infrastructure such as specially trained health care providers or diagnostic equipment or have a lower cost–benefit ratio . [ 4 ] The first list for children was created in 2007, and the list is in its 9th edition as of 2023 [update] . [ 4 ] [ 5 ] [ 6 ] [ 7 ] Note: An α indicates a medicine is on the complementary list. [ 4 ] Reserve antibiotics are last-resort antibiotics. The EML antibiotic book was published in 2022. [ 8 ] [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/WHO_Model_List_of_Essential_Medicines_for_Children
Woodchuck Hepatitis Virus (WHV) Posttranscriptional Regulatory Element (WPRE) is a DNA sequence that, when transcribed, creates a tertiary structure enhancing expression . The sequence is commonly used in molecular biology to increase expression of genes delivered by viral vectors . [ 1 ] WPRE is a tripartite regulatory element with gamma, alpha, and beta components. The alpha component is 80 bp long: GCCACGGCGGAACTCATCGCCGCCTGCCTTGCCCGCTGCTGGACAGGGGCTCGGCTGTTGGGCACTGACAATTCCGTGGT [ 2 ] When used alone without the gamma and beta WPRE components, the alpha component is only 9% as active as the full tripartite WPRE. The sequence for the full tripartite WPRE is: AATCAACCTCTGGATTACAAAATTTGTGAAAGATTGACTGGTATTCTTAACTATGTTGCTCCTTTTACGCTATGTGGATACGCTGCTTTAATGCCTTTGTATCATGCTATTGCTTCCCGTATGGCTTTCATTTTCTCCTCCTTGTATAAATCCTGGTTGCTGTCTCTTTATGAGGAGTTGTGGCCCGTTGTCAGGCAACGTGGCGTGGTGTGCACTGTGTTTGCTGACGCAACCCCCACTGGTTGGGGCATTGCCACCACCTGTCAGCTCCTTTCCGGGACTTTCGCTTTCCCCCTCCCTATTGCCACGGCGGAACTCATCGCCGCCTGCCTTGCCCGCTGCTGGACAGGGGCTCGGCTGTTGGGCACTGACAATTCCGTGGTGTTGTCGGGGAAGCTGACGTCCTTTCCATGGCTGCTCGCCTGTGTTGCCACCTGGATTCTGCGCGGGACGTCCTTCTGCTACGTCCCTTCGGCCCTCAATCCAGCGGACCTTCCTTCCCGCGGCCTGCTGCCGGCTCTGCGGCCTCTTCCGCGTCTTCGCCTTCGCCCTCAGACGAGTCGGATCTCCCTTTGGGCCGCCTCCCCGCCTG This sequence has 100% homology with base pairs 1093 to 1684 of the Woodchuck hepatitis B virus (WHV8) genome. When used in the 3' untranslated region (UTR) of a mammalian expression cassette , it can significantly increase mRNA stability and protein yield. [ 2 ]
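As a quick sanity check on the sequences above, the alpha component really is 80 bp. The sketch below verifies the length and alphabet of the alpha sequence as given in the article; the GC-content calculation is an illustrative extra, not a claim from the article:

```python
# Alpha component of WPRE, copied from the sequence given above (80 bp).
ALPHA = (
    "GCCACGGCGGAACTCATCGCCGCCTGCCTTGCCCGCTGCTGGACAGGGGC"
    "TCGGCTGTTGGGCACTGACAATTCCGTGGT"
)

# Sequence should contain only the four DNA bases.
assert set(ALPHA) <= set("ACGT")

gc_fraction = (ALPHA.count("G") + ALPHA.count("C")) / len(ALPHA)
print(len(ALPHA), round(gc_fraction, 3))  # length is 80
```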
https://en.wikipedia.org/wiki/WHP_Posttranscriptional_Response_Element
WI-38 is a diploid human cell line composed of fibroblasts derived from lung tissue of a 3-month-gestation female fetus. [ 1 ] [ 2 ] The fetus came from a legal abortion performed in Sweden in 1962 [ 3 ] . The cell line was isolated by Leonard Hayflick the same year, [ 4 ] [ 5 ] and has been used extensively in scientific research, with applications ranging from developing important theories in molecular biology and aging to the production of most human virus vaccines . [ 6 ] The uses of this cell line in human virus vaccine production is estimated to have saved the lives of millions of people. [ 4 ] [ 7 ] [ 8 ] The WI-38 cell line stemmed from earlier work by Hayflick growing human cell cultures. [ 2 ] In the early 1960s, Hayflick and his colleague Paul Moorhead at the Wistar Institute in Philadelphia , Pennsylvania discovered that when normal human cells were stored in a freezer, the cells remembered the doubling level at which they were stored and, when reconstituted, began to divide from that level to roughly 50 total doublings (for cells derived from fetal tissue). Hayflick determined that normal cells gradually experience signs of senescence as they divide, first slowing before stopping division altogether. [ 2 ] [ 5 ] This finding is the basis for the Hayflick limit , which specifies the number of times a normal human cell population will divide before cell division stops. [ 9 ] Hayflick's discovery later contributed to the determination of the biological roles of telomeres. [ 10 ] Hayflick claimed that the finite capacity of normal human cells to replicate was an expression of aging or senescence at the cellular level. [ 2 ] [ 5 ] [ 9 ] During this period of research, Hayflick also discovered that if cells were properly stored in a freezer, cells would remain viable and that an enormous number of cells could be produced from a single starting culture. 
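The scale of "an enormous number of cells" follows from simple doubling arithmetic. The sketch below is an idealized illustration only: it assumes every cell divides at every population doubling, which real cultures do not, so it gives an upper bound rather than a biological prediction:

```python
# Idealized arithmetic: n population doublings multiply the cell count
# by 2**n, assuming every cell divides at every doubling.
def cells_after_doublings(initial_cells: int, doublings: int) -> int:
    """Upper-bound population after a given number of doublings."""
    return initial_cells * 2 ** doublings

# Fetal-derived strains such as WI-38 reach roughly 50 total doublings
# (the Hayflick limit) before division stops.
print(cells_after_doublings(1, 50))  # 1125899906842624, about 1.1e15
```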
One of the cell strains that Hayflick isolated, which he named WI-38, was found to be free of contaminating viruses, unlike the primary monkey kidney cells then in use for virus vaccine production. [ 5 ] In addition, WI-38 cells could be frozen, then thawed and exhaustively tested. These advantages led to WI-38 quickly replacing primary monkey kidney cells for human virus vaccine production. [ 7 ] [ 8 ] [ 11 ] WI-38 has also been used for research on numerous aspects of normal human cell biology. [ 8 ] [ 9 ] [ 11 ] WI-38 was invaluable to early researchers, especially those studying virology and immunology, since it was a readily available cell line of normal human tissue. Unlike the HeLa cell line , which consists of cancerous cells, WI-38 is a normal human cell population. Researchers in labs across the globe have since used WI-38 in their discoveries, most notably Hayflick in his development of human virus vaccines. [ 7 ] Infected WI-38 cells secrete the virus, and can be cultured in large volumes suitable for commercial production. [ 2 ] Virus vaccines produced in WI-38 have prevented disease or saved the lives of billions of people. [ 7 ] [ 8 ] Vaccines produced in WI-38 include those made against adenoviruses , rubella , measles , mumps , varicella zoster , poliovirus , hepatitis A and rabies . [ 6 ] [ 7 ] [ 8 ] [ 11 ] The WI-38 cell line was one of the first cell lines whose diploid genome was sequenced. [ 12 ] This is significant because most human genome sequences have not been resolved to the chromosome level; that is, it has remained largely unclear which genetic variant is on which of the two chromatids . Besides being an important cell line for experimental studies (e.g. on aging), the WI-38 line is believed to have remained diploid since it was originally established in 1961. Nearly 60 years later, karyotyping by Soifer et al. (2020) showed that the WI-38 genome has not acquired major rearrangements such as translocations .
More importantly, the de novo phased assembly confirms that the genome has in fact remained diploid and retained its heterozygosity throughout. It is therefore a good model for genome sequencing and serves as another reference genome. [ 12 ]
https://en.wikipedia.org/wiki/WI-38
The WIEN2k package is a computer program written in Fortran that performs quantum mechanical calculations on periodic solids . It uses the full-potential (linearized) augmented plane-wave and local-orbitals [FP-(L)APW+lo] basis set to solve the Kohn–Sham equations of density functional theory . WIEN2k was originally developed by Peter Blaha and Karlheinz Schwarz from the Institute of Materials Chemistry of the Vienna University of Technology . The first public release of the code was in 1990. [ 4 ] Subsequent releases were WIEN93, WIEN97, and WIEN2k. [ 5 ] The latest version, WIEN2k_24.1, was released in August 2024. [ 6 ] It has been licensed by more than 3400 user groups and has about 16000 citations on Google Scholar. WIEN2k uses density functional theory to calculate the electronic structure of a solid. It is based on one of the most accurate schemes for band-structure calculations, the full-potential (linearized) augmented plane-wave plus local orbitals [FP-(L)APW+lo] method. WIEN2k uses an all-electron treatment, including relativistic effects. WIEN2k works with both centrosymmetric and non-centrosymmetric lattices, with all 230 space groups built in. It supports a variety of functionals including the local-density approximation (LDA), many different generalized gradient approximations (GGA), Hubbard models , on-site hybrids, meta-GGAs and full hybrids, and can also include spin-orbit coupling and Van der Waals terms. It can be used for structure optimization of both unit-cell dimensions and internal atomic positions. For the latter an adaptive fixed-point iteration is used which simultaneously solves for the atomic positions and the electron density. [ 7 ] The code supports both OpenMP and MPI parallelization, which can be used efficiently in combination. It also supports parallelization by dispatching parts of the calculations to different computers.
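WIEN2k's actual mixer is far more sophisticated, but the basic idea of a damped (mixed) fixed-point iteration — accepting only a fraction of each new solution so the self-consistency cycle stays stable — can be shown on a scalar toy problem, x = cos(x). All names, the starting guess, and the mixing factor here are illustrative, not taken from WIEN2k:

```python
import math

def damped_fixed_point(f, x0, mixing=0.3, max_iter=500, tol=1e-12):
    """Iterate x <- (1 - a)*x + a*f(x) until self-consistency.

    With mixing a = 1 this is plain fixed-point iteration; a < 1 damps
    the update, the same trick SCF codes use to stabilize convergence.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - mixing) * x + mixing * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Self-consistent solution of x = cos(x), approximately 0.739.
x = damped_fixed_point(math.cos, 0.5)
print(x)
```

In the real code the unknown is not a scalar but the electron density (and, for structure optimization, the atomic positions), yet the convergence logic is the same: iterate until the output of one cycle reproduces its input.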
A number of different properties can be calculated from the resulting densities, many of these in packages which have been contributed by users over the years. WIEN2k can be used to calculate:
https://en.wikipedia.org/wiki/WIEN2k
WIL Research Laboratories, LLC (acquired in 2016 and renamed Charles River Laboratories Ashland, LLC) [ 2 ] was a contract research organization (CRO), privately held for 40 years, that provided product safety toxicological research, metabolism , bioanalytical , pharmacological , and formulation services to the pharmaceutical , biotechnology , chemical , agrochemical , and food products industries, as well as manufacturing support for clinical trials . WIL Research was well-known internationally in many disciplines, and considered by many industry experts to be the premier laboratory in the world for developmental and reproductive toxicology (DART). [ 3 ] WIL Research Laboratories was founded in 1976 in Cincinnati , Ohio by G. Bruce Briggs , Ralph S. Hodgdon , and Robert W. Brigham , with Briggs serving as the company's first president. [ citation needed ] The company was initially a limited mammalian toxicological testing laboratory that conducted short-term studies for several clients in the Cincinnati area. In 1978, Great Lakes Chemical Corporation acquired WIL Research Laboratories. [ 4 ] By 1980, WIL Research outgrew its facilities in Cincinnati , subsequently acquired the 75-acre Hess & Clark research facility on the outskirts of Ashland , Ohio , and by 1982 had moved its operations to the new location. [ 5 ] The move to Ashland enabled WIL to conduct a larger number of studies as it began to expand its client base. [ 6 ] Dr. Joseph F. Holson was named President and Director of WIL Research Laboratories in 1988. Under his leadership over the next 20 years, WIL Research grew from 31 employees into a dynamic contract research organization employing more than 600 individuals at the Ashland site. 
[ 7 ] This success was attributed to the company's entrepreneurial scientific management, study director-centric business model , internationally recognized scientific prowess (particularly in DART), internally developed innovations (including the industry's first protocol-driven toxicology data management software system), and strong involvement in the Ashland community. [ 7 ] [ 8 ] During Holson's tenure, WIL Research continuously expanded its scientific capabilities, facilities, and staffing levels. During this period, the company grew from a limited mammalian toxicology research laboratory into a robust interdisciplinary CRO offering developmental and reproductive toxicology, neurotoxicology , inhalation toxicology, developmental neurotoxicology, large animal toxicology, juvenile toxicology, safety pharmacology , metabolism , analytical and bioanalytical chemistry , and formulation services to a globally diverse client base. [ 9 ] Underpinning the continuous expansion of service capabilities was a steady expansion of the company's facilities from approximately 30,000 square feet to more than 300,000 square feet of dedicated laboratory , vivarium , and support services space. [ 7 ] At the heart of Dr. Holson's vision, though, was a drive to continually deepen the company's talent pool, as the number of employees in Ashland grew from 31 to more than 600. Joseph Holson was well-known as an energetic, outgoing leader with a vision for the company that revolved around the success of his staff and ongoing recruitment efforts. [ 7 ] Critical to the success of WIL Research was a continuous investment in staff training, as new biologists typically underwent a 9-12 month training period and all employees regularly completed continuing education not only in their specific areas of expertise but also in the subjects of animal care and welfare, Good Laboratory Practices , and research integrity. 
[ 7 ] [ 8 ] Many of the internal training programs developed at WIL Research were highly regarded and requested by clients and industry partners. A key driver of WIL's steady growth was its study director-centric business model , which viewed each study director as an individual business unit with scientific, project management, and marketing responsibilities. This approach was in contrast to the typical division within CROs between science and marketing. WIL Research emphasized direct scientist-to-scientist interaction as much as possible across the entire scope of each project, which gained the company numerous accolades from its clients. [ 10 ] Examples of the types of projects undertaken by WIL Research included studies of drugs for the treatment of herpes , Alzheimer's disease , glaucoma , cancer , and AIDS , numerous pesticides , and replacement chemicals for ozone-depleting chlorofluorocarbons in fire extinguishers. [ 8 ] Although highly respected in many disciplines, WIL Research was considered by many to be the leading laboratory in the world for developmental and reproductive toxicology (DART). [ 3 ] This leadership was driven by Dr. Joseph Holson , an internationally recognized authority in the field. The DART division at WIL Research, led initially by Dr. Holson and subsequently by Mr. Mark D. Nemec and Dr. Donald G. Stump, became known not only for high-quality regulatory guideline studies, but also for innovative, specialized DART research. In 1978, as a result of expanding toxicology testing services, the WIL Toxicology Data Management System (WTDMS™) was developed. [ 6 ] This protocol-driven software system was the first in the CRO industry and became the prototype for other major toxicology testing laboratories. [ 8 ] WTDMS™ was licensed to several other toxicology testing laboratories, and was used continuously by WIL Research Laboratories for nearly forty years prior to its gradual replacement by the Provantis system.
[ 11 ] While WIL Research depended on the broader Ashland area for a steady supply of qualified personnel, it also contributed extensively to Ashland's economic growth, becoming one of the largest employers in the county. [ 5 ] [ 7 ] [ 10 ] During Holson's tenure, the company invested approximately $62 million in facilities renovation and expansion. In a talk given to the local Rotary club, Holson added that WIL Research at that time served approximately 550 clients (domestic and international), most of whom regularly visited Ashland to monitor their studies. In 2006, WIL Research received the Golden Oak award from the mayor of Ashland , an award recognizing "the foresight, diligence and unselfishness of individuals or organizations who contribute to new growth, strengthen the roots or improve the overall community of Ashland ." [ 12 ] WIL Research also actively supported Ashland University , with many of its senior scientists serving as adjunct professors in their areas of expertise, especially in the undergraduate toxicology program, which the company helped begin in 1984. [ 8 ] [ 13 ] [ 14 ] Dr. Joseph Holson also served on Ashland University 's Science Advisory Board (1990-2008) and Board of Trustees (1993-1998), and gave the initial lecture, entitled "Risk and Regulation," of a year-long lecture series in support of the university's Environmental Studies program in 1995. [ 15 ] After nearly two decades of sustained organic growth, Joseph Holson led WIL Research through an initial period of private capital-financed expansion. In 2004, Holson and four other senior executives (Mark D. Nemec, Dr. Christopher P. Chengelis, Dr. Daniel W. Sved, and James M. Rudar) initiated a management buyout (in partnership with private equity firm Behrman Capital) from Great Lakes Chemical Corporation which led to the formation of a holding company (WRH, Inc.). [ 16 ] The expansion continued with the merger of Biotechnics, LLC ( Hillsborough, NC , led by Dr. 
George Parker) with WIL Research operations in Ashland , the acquisitions of Notox Beheer BV ( 's-Hertogenbosch, Netherlands , led by Jan van der Hoeven, Dr. Wilbert Frieling, and Dr. Ilona Enninga) [ 17 ] and QS Pharma LLC ( Boothwyn, PA ), [ 18 ] and the subsequent $500 million sale of WRH, Inc. to American Capital, Ltd. (NASDAQ:ACAS) in 2007. [ 19 ] [ 20 ] After the sale to ACAS, Dr. Holson served as Vice President and Chief Scientific Officer of the global entity while continuing to serve as President and Director of WIL Research Laboratories in Ashland, Ohio until his retirement in November 2008. Upon Dr. Holson's retirement, Mr. Nemec was appointed President and Chief Operating Officer of the Ashland flagship facility, [ 21 ] and Dr. Chengelis was named Vice President and Chief Scientific Officer . [ 22 ] Under the ownership of American Capital, David Spaight was named Chairman and CEO of the global holding company in 2010, which undertook a re-branding and global integration effort. [ 23 ] During the ACAS-led period, growth of the company occurred primarily through additional acquisitions, including those of Midwest BioResearch, LLC ( Skokie, IL , led by Dr. Michael Schlosser) [ 24 ] and Ricerca Bioscience's pharmaceutical services facility in Lyon, France (led by Stéphane Bulle). [ 25 ] [ 26 ] In addition, a new safety assessment facility in Schaijk , Netherlands (close to the existing Den Bosch site) was opened in 2015 to augment the European operations. [ 27 ] These activities combined to increase the total number of employees in the global entity to more than 1300, with total 2015 revenues of $215 million. [ 2 ] In early 2016, Wilmington, MA -based Charles River Laboratories International, Inc. (NYSE:CRL) , led by James C. Foster , acquired the global holdings of WIL Research for $585 million in cash. [ 28 ] [ 29 ] The flagship WIL Research Laboratories facility in Ashland, OH was subsequently renamed Charles River Laboratories Ashland, LLC. [ 30 ]
https://en.wikipedia.org/wiki/WIL_Research_Laboratories
The WIMM One is a developer device for the WIMM platform produced by WIMM Labs . It is a wearable computing device running a modified version of the Android operating system. It comes preloaded with several apps. Additional applications can be downloaded from the micro app store or side-loaded over USB. The WIMM One has a transflective bi-modal screen. In high-power mode it can reproduce colour images with an 18-bit colour depth ( OS limited to 16-bit). In low-power mode it can reproduce 4-bit grayscale images. [ 1 ] The WIMM One's screen is on all the time the device is powered on. This allows information to be readily available to the user without them having to interact with the device. When the device is in low-power mode the screen is updated once per minute. When the device is woken into high-power mode the screen refreshes at 60 fps and fully interactive apps can be run. The WIMM One has a complement of sensors similar to that of a smartphone, including: A 14-pin connector runs across the back of the WIMM One. This is used for charging and USB communications. It also supports data communication with accessories developed for the WIMM platform. For access to the outside world the WIMM One has 2 radios: one for Wi-Fi 802.11b/g and one for Bluetooth 2.1. These are aggressively power-managed by the OS. The Wi-Fi radio is only turned on for short bursts where it is used to sync data. The Bluetooth radio can be used to maintain a connection with a smartphone running Android, BlackBerryOS or iOS . This allows the WIMM One to react to telephony events such as incoming calls, for example allowing calls to be rejected to voicemail. When paired with Android smartphones, it will also receive SMS and contact information. The Bluetooth link can also be used to sync data by taking advantage of the paired device's internet connectivity. Syncing happens at a user-set interval between 30 minutes and 12 hours.
Applications can also request an immediate, off-schedule network connection. Due to the form factor of the device being significantly different from the majority of Android devices, many of the default Android UI elements are unwieldy on the WIMM. To compensate, a set of custom widgets and APIs is provided for developers. These include widgets for text entry and dialog boxes . Applications developed for the WIMM One can be uploaded to the micro app store provided by WIMM Labs. Instead of heavy management on the module's screen, users can manage their apps and standard module settings through a web-based console from their desktop or smartphone. Standard settings include calendar setup (Outlook and Google), global cities, sync intervals, and date/time formats. The WIMM One automatically installs applications that have been selected in the web console. The WIMM One was generally well received, with significant press from The Verge [ 2 ] and Engadget . [ 3 ] This included a front-page feature in Engadget's Distro magazine. [ 4 ] According to WIMM's website, in the summer of 2012 WIMM Labs entered into an exclusive, confidential relationship with an unnamed company and ceased sales of the Developer Preview Kit. Existing WIMM One owners can continue to synchronize their devices. As of August 2013 it is known that Google is the partner in this relationship. [ 5 ]
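The article notes that the panel is 18-bit but OS-limited to 16-bit colour, without saying which 16-bit packing the OS uses. A common choice is RGB565, sketched below purely as an illustration (the helper name and the assumption of RGB565 are mine, not from the source):

```python
def rgb888_to_rgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit-per-channel colour into a 16-bit RGB565 word.

    RGB565 keeps 5 bits of red, 6 of green and 5 of blue; the low
    bits of each channel are simply truncated.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(rgb888_to_rgb565(255, 255, 255)))  # white
print(hex(rgb888_to_rgb565(255, 0, 0)))      # pure red
```

Green gets the extra bit because the eye is most sensitive to it, which is why 16-bit displays conventionally split the word 5-6-5.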
https://en.wikipedia.org/wiki/WIMM_One
WIPI ( / ˈ w ɪ p i / ; Korean pronunciation: [wipʰi] ), Wireless Internet Platform for Interoperability, was a middleware platform used in South Korea that allowed mobile phones, regardless of manufacturer or carrier, to run applications. Much of WIPI was based on Java , but it also allowed compiled binary applications to be downloaded and run. The specification was created by the Mobile Platform Special Subcommittee of the Korea Wireless Internet Standardization Forum (KWISF). The South Korean government required that all cellular phones sold in the country include the WIPI platform, to avoid inordinate competition between mobile companies, but the policy was withdrawn in April 2009.
https://en.wikipedia.org/wiki/WIPI_(platform)
The WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge [ 1 ] or GRATK Treaty is an international legal instrument to combat biopiracy [ 2 ] through disclosure requirements for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge . [ 3 ] The treaty was concluded at the headquarters of the World Intellectual Property Organization (WIPO) in Geneva , Switzerland , on 24 May 2024, [ 4 ] after more than two decades [ 5 ] of previous developments by WIPO's Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC). [ 6 ] The treaty was deemed "historic in many regards" [ 7 ] by some observers, and qualified by the Indigenous Caucus [ 8 ] as a "first step towards guaranteeing just and transparent access to these resources." [ 9 ] The IGC was established in 2001 by the General Assembly of WIPO . [ 10 ] [ 11 ] Since 2010, the mandate of the IGC has remained that of concluding a consensual text which would bridge the gaps between the numerous existing international legal instruments, which provide some, but insufficient, protection for traditional knowledge , traditional cultural expressions , or genetic resources ( UNDRIP , Convention on Biological Diversity , Nagoya Protocol , FAO plant treaty , UNESCO conventions on culture and intangible heritage, etc.), none of which include explicit protections for indigenous peoples and local communities . [ 12 ] [ 13 ] The IGC's negotiations were suspended in 2020 because of the COVID-19 pandemic, and resumed in 2022. [ 13 ] In 2022, the IGC agreed to move on to the next steps of treaty negotiation, and WIPO agreed to convene a Diplomatic Conference by 2024 to consider a draft treaty that the Committee had been working on.
[ 14 ] The selection of the draft text that was to serve as a basis for the negotiations of the final text of the treaty received some criticism from civil society observers. [ 15 ] [ 16 ] The 2022 WIPO General Assembly decided that a short version of the draft (the "Chair's text"), which had been drafted by Australian ambassador Ian Gross, Chair of the IGC in 2019, would be the basis for the treaty's negotiations. Prior to that decision, the text which was expected to be used as basis for the negotiations was the "Consolidated text", a more comprehensive document on which IGC Member States had been working by consensus for years. [ 15 ] Contrary to the Consolidated text, which addressed traditional knowledge and traditional cultural expressions as such, and different forms of intellectual property, the Chair's text focused only on genetic resources and the patent system . [ 17 ] In August 2023, India submitted a proposal with a series of amendments to the Chair's text, aiming to add back some elements from the Consolidated text into the discussion. Ahead of the Diplomatic Conference, two extraordinary meetings were convened to prepare it: the Special Session, which took place from 4 to 8 September 2023, reviewed the part of the Chair's text containing substantive articles, and the Preparatory Committee, held the week after, addressed administrative and procedural parts of the draft. [ 18 ] Jointly, these two meetings yielded a revised draft, which served as the basis for the 2024 Diplomatic Conference discussions. The Preparatory Committee also adopted Draft Rules of Procedure for the Diplomatic Conference, as well as a List of Invitees. On 13 September 2023, the committee had to suspend its session due to the absence of proposals from Member States to host the Diplomatic Conference.
On 13 December, the committee reconvened and, facing the lack of alternative proposals, adopted a decision to hold the Diplomatic Conference at WIPO's headquarters in Geneva. [ 19 ] As explained on the website of the Diplomatic Conference: On July 21, 2022, the WIPO General Assembly decided to convene a Diplomatic Conference to conclude an International Legal Instrument Relating to Intellectual Property, Genetic Resources and Traditional Knowledge Associated with Genetic Resources no later than 2024. [ 20 ] The Diplomatic Conference was held in Geneva, Switzerland, between 13 and 24 May 2024. [ 19 ] During the Conference, the draft resulting from the Special Session and Preparatory Committee was discussed and amended. The final legal instrument, the WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (often referred to by its acronym "GRATK" [ 5 ] [ 21 ] ), was adopted in the night [ 22 ] of Thursday 23 to Friday 24 May 2024, and opened for signature on the afternoon of 24 May at the WIPO headquarters in Geneva. [ 2 ] This is the first WIPO Treaty to address the interface between intellectual property, genetic resources and traditional knowledge, and the first WIPO Treaty to include provisions specifically for Indigenous Peoples as well as local communities. The Treaty, once it enters into force with 15 contracting parties, will establish in international law a new disclosure requirement for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge. [ 4 ] The Treaty was concluded on 24 May 2024 and immediately opened for signature. Under the Treaty's Article 16, it is stated that the Treaty will be "open for signature at the Diplomatic Conference in Geneva and thereafter […] for one year after its adoption."
[ 1 ] At the closing of the Diplomatic Conference, on 24 May 2024, the Treaty was signed by 30 countries: Algeria, Bosnia and Herzegovina, Brazil, Burkina Faso, Central African Republic, Chile, Colombia, Congo, Cote d'Ivoire, Eswatini, Ghana, Lesotho, Madagascar, Malawi, Marshall Islands, Morocco, Namibia, Nicaragua, Niger, Nigeria, Niue, North Korea, Paraguay, Saint Vincent & the Grenadines, São Tomé and Príncipe, Senegal, South Africa, Tanzania, Uruguay, and Vanuatu. [ 5 ] Under Article 17, the Treaty is planned to enter into force 3 months after ratification or accession by 15 countries . Signature, ratification and accession are open to any Member State of the WIPO , under the Treaty's Article 12. Countries that sign the Treaty within the first year (until 24 May 2025) must then ratify it in order for the Treaty to enter into force. Countries deciding to join after the initial one-year period will join through accession (equivalent to both signature and ratification). As of December 2024, 39 countries had signed the treaty. [ 23 ]
https://en.wikipedia.org/wiki/WIPO_Treaty_on_Intellectual_Property,_Genetic_Resources_and_Associated_Traditional_Knowledge
WIRIS is a company, legally registered as Maths for More , providing a set of proprietary HTML-based JavaScript tools that can author and edit mathematical formulas , [ 1 ] solve mathematical problems [ 2 ] and display mathematical graphics on the Cartesian coordinate system . The WIRIS equation editor [ 3 ] is a native browser application , with a light server side, that supports both MathML and LaTeX . Since 2017, after buying Design Science, a US-based developer of the MathType desktop software, WIRIS has rebranded its web equation editor as MathType by WIRIS . [ 4 ] WIRIS is based in Barcelona, Spain and was founded by teachers and former students from the Technical University of Catalonia (Barcelona Tech), coordinated by Professor Sebastià Xambó. [ 5 ]
https://en.wikipedia.org/wiki/WIRIS
WISDOM ( Water Ice and Subsurface Deposit Observation on Mars ) is a ground-penetrating radar that is part of the science payload on board the European Space Agency 's Rosalind Franklin rover , [ 2 ] tasked to search for biosignatures and biomarkers on Mars. The rover is planned to launch no earlier than 2028 and land on Mars in 2029. The search for evidence of past or present life on Mars is the principal objective of the ExoMars programme. If such evidence exists, it will most likely be in the subsurface, where organic molecules are shielded from the destructive effects of ionizing radiation and atmospheric oxidants. For this reason, the Rosalind Franklin rover mission has been optimized to investigate the subsurface and sample those locations where conditions for the preservation of evidence of past life are most likely to be found. WISDOM is a step-frequency radar that operates in the frequency range from 0.5 to 3 GHz. [ 3 ] [ 4 ] It will provide high-resolution 3D imaging down to a depth of 3 metres. [ 5 ] WISDOM will use UHF radar pulses to provide the three-dimensional geological context of the shallow subsurface underneath the ExoMars rover. [ 3 ] [ 5 ] It will be used to identify optimal drilling sites and to ensure the safety of the core drill , as well as to investigate the local distribution and state of subsurface water ice and brine. [ 3 ] It transmits and receives signals using two small Vivaldi antennas mounted on the aft section of the rover. Electromagnetic waves penetrating into the ground are reflected at places where there is a sudden transition in the electrical parameters of the soil. By studying these reflections it is possible to construct a stratigraphic map of the subsurface and identify underground targets down to 2 to 3 m (7 to 10 ft) in depth, comparable to the 2 m reach of the rover's drill.
These data, combined with those produced by the PanCam and by the analyses carried out on previously collected samples, will be used to support drilling activities. [ 6 ] Field tests with a remote-controlled rover show that WISDOM can be operated continuously while the rover is in motion at a reduced speed of approximately 20 m/h. [ 3 ] All WISDOM data will be relayed to Earth via the ExoMars Trace Gas Orbiter , and processing will be performed on Earth. The WISDOM team consists of scientists from France, Germany, Italy, Norway, Austria, the United Kingdom and the United States. The Principal Investigator is Valérie Ciarletti, from LATMOS, France. [ 5 ] The Co-Principal Investigator is Svein-Erik Hamran from Norway. [ 7 ]
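The 0.5 to 3 GHz sweep quoted above gives WISDOM 2.5 GHz of bandwidth, and for any radar the achievable range resolution follows from ΔR = c / (2·B·√εr), where B is the bandwidth and εr the relative permittivity of the medium. A sketch using an assumed εr = 4, a typical dry-soil value that is not stated in the article:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz: float, eps_r: float = 1.0) -> float:
    """Two-way range resolution of a radar with the given bandwidth,
    propagating in a medium of relative permittivity eps_r."""
    return C / (2.0 * bandwidth_hz * eps_r ** 0.5)

bandwidth = 3.0e9 - 0.5e9  # WISDOM sweep: 0.5 to 3 GHz
print(range_resolution(bandwidth))       # ~6 cm in free space
print(range_resolution(bandwidth, 4.0))  # ~3 cm in dry soil (assumed eps_r = 4)
```

Centimetric vertical resolution over a 2 to 3 m penetration depth is consistent with the instrument's stated job of mapping shallow stratigraphy ahead of the drill.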
https://en.wikipedia.org/wiki/WISDOM_(radar)
WIYN Consortium founding members were the University of Wisconsin–Madison (W), Indiana University (I), Yale University (Y), and the National Optical Astronomy Observatories (N). Yale University withdrew from the WIYN consortium on April 1, 2014, and was replaced by the University of Missouri in the fall of that year. In 2015, a NASA-NSF partnership called NN-EXPLORE effectively took over NOAO's share, although NOAO still manages the operations. [ 1 ] Purdue University joined in 2017 for a three-year period. [ 2 ] The consortium operates two telescopes of 3.5 m and 0.9 m diameters. The universities financed the construction of the WIYN Observatory at Kitt Peak National Observatory (KPNO) in 1994. In 2001, the WIYN Consortium took over control of the KPNO 36-inch (910 mm) telescope, built in 1960, and rechristened it as the WIYN 0.9 m Telescope . This small but popular telescope was in danger of being mothballed for budgetary reasons.
https://en.wikipedia.org/wiki/WIYN_Consortium
WMT Digital is a web engineering and technology company headquartered in Miami, Florida . The company provides web platforms for colleges and professional sports leagues, including content creation, streaming services, subscriptions, ticketing, and marketing. [ 1 ] WMT Digital began when Andres Focil was tasked with helping his alma mater by enhancing the University of Miami's athletic website's search engine presence, at a time when Miami's professional teams – the Miami Heat , Miami Dolphins and Miami Marlins – were dominating the headlines in Miami. He was able to rework the university's athletic department's digital positioning through search optimization, video and digital strategy, and soon their website was ranking higher in searches than the Miami Heat's. [ 2 ] Arkansas contacted him to see if he could help improve their search presence and revamp their college athletics website. After successful collaborations with Arkansas , Florida State, Clemson, LSU, and Georgia Tech, Focil collaborated with RevelXP (formerly Tailgate Guys) and eventually integrated their reservation, catering and online store platforms into WMT Digital. WMT Digital could now offer digital platforms for merchandise, content, video, data warehousing, ticketing and streaming in its clients' web portals. [ 3 ] [ 4 ] WMT Digital collaborated with LaLiga and Clemson to design a digital video platform that allowed coaches to virtually interact with recruits, stream game footage and offer them virtual tours of the campus on a single platform during the COVID-19 pandemic. [ 5 ] [ 6 ] The platform also provided LaLiga fans with a new interactive watch party feature. [ 7 ] [ 8 ] In 2021, WMT Digital helped the University of Notre Dame release a streaming app similar to Netflix, Fighting Irish TV , which allows subscribers to access AI-driven highlights, special content and every Fighting Irish game since 1991.
[ 9 ] [ 10 ] [ 11 ] The company's next major expansion was creating a stadium app for Ohio State's Horseshoe Stadium, which included access to concessions, parking, and game information. Creating an app for San Diego's Snapdragon Stadium was next. The stadium is home to San Diego State as well as professional teams, including the San Diego Legion and San Diego Wave FC . [ 12 ] WMT Digital has clients such as USA Basketball , the NFL , the Professional Volleyball Federation, NASCAR, LaLiga, the National Association of Basketball Coaches , and Collegiate Sports Connect. [ 13 ] [ 14 ] [ 15 ] In 2024, WMT Digital finished a redesign of the USA Basketball website ahead of the 2024 Summer Olympics . [ 15 ] In June 2024, WMT Digital acquired Aloompa, a mobile app provider for live event experiences. [ 16 ] In August 2024, the company announced it had acquired Event Dynamic, a provider of mobile event technology. [ 17 ] [ 18 ] In 2023, Focil and the WMT Digital team developed The Six, an AI platform that automatically generates game recaps from box scores. [ 19 ] The software uses proprietary self-hosted large language models to scan game information and automatically produce recaps of games for university athletic departments. [ 20 ] The Six's name is a reference to the sixth man in sports, a utility player who fills the gaps, since the software is meant to help produce content for non-revenue sports like baseball, ice hockey, soccer and volleyball. [ 21 ] The Six can even be configured to mimic the writing style of a journalist. The software has been beta-tested with Clemson, Vanderbilt and Arkansas. [ 22 ]
https://en.wikipedia.org/wiki/WMT_Digital
Tungsten dichloride dioxide , or tungstyl chloride , is the chemical compound with the formula WO2Cl2 . It is a yellow solid that is used as a precursor to other tungsten compounds. Like other tungsten halides, WO2Cl2 is sensitive to moisture, undergoing hydrolysis. WO2Cl2 is prepared by a ligand redistribution reaction from tungsten trioxide and tungsten hexachloride : 2 WO3 + WCl6 → 3 WO2Cl2 Using a two-zone tube furnace , a vacuum-sealed tube containing these solids is heated to 350 °C. The yellow product sublimes to the cooler end of the reaction tube. No redox occurs in this process. [ 2 ] An alternative route highlights the oxophilicity of tungsten: [ 3 ] This reaction, like the preceding one, proceeds via the intermediacy of WOCl4 . Gaseous tungsten dichloride dioxide is a monomer . [ 4 ] Solid tungsten dichloride dioxide is a polymer consisting of distorted octahedral W centres. The polymer is characterized by two short W–O distances, typical of a W–O multiple bond, and two long W–O distances more typical of a single or dative W–O bond. [ 5 ] Tungsten forms a number of oxyhalides including WOCl4 , WOCl3 , and WOCl2 . The corresponding bromides ( WOBr4 , WOBr3 , WOBr2 ) are also known, as is WO2I2 . [ 6 ] WO2Cl2 is a Lewis acid , forming soluble adducts of the type WO2Cl2L2 , where L is a donor ligand such as bipyridine or dimethoxyethane . Such complexes often cannot be prepared by depolymerization of the inorganic solid, but are generated in situ from WOCl4 . [ 7 ]
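The redistribution of tungsten trioxide and tungsten hexachloride balances as 2 WO3 + WCl6 → 3 WO2Cl2, which is easy to verify by atom counting — consistent with the statement that no redox occurs, since tungsten stays in the +6 oxidation state on both sides. A sketch of that bookkeeping (the Counter-based representation is illustrative, not from the source):

```python
from collections import Counter

# Atom inventories of the three species.
WO3    = Counter(W=1, O=3)
WCl6   = Counter(W=1, Cl=6)
WO2Cl2 = Counter(W=1, O=2, Cl=2)

def scale(formula: Counter, coeff: int) -> Counter:
    """Multiply an atom inventory by a stoichiometric coefficient."""
    return Counter({atom: coeff * n for atom, n in formula.items()})

# 2 WO3 + WCl6 -> 3 WO2Cl2: both sides must carry identical atom counts.
lhs = scale(WO3, 2) + scale(WCl6, 1)
rhs = scale(WO2Cl2, 3)
print(dict(lhs) == dict(rhs), dict(rhs))
```

Both sides carry 3 W, 6 O and 6 Cl, confirming the equation is balanced.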
https://en.wikipedia.org/wiki/WO2Cl2