Vismodegib, sold under the brand name Erivedge, is a medication used for the treatment of basal-cell carcinoma (BCC). [ 2 ] Its approval on January 30, 2012, made it the first Hedgehog signaling pathway targeting agent to gain U.S. Food and Drug Administration (FDA) approval. [ 3 ] As of June 2011, the drug was also undergoing clinical trials for metastatic colorectal cancer , small-cell lung cancer , advanced stomach cancer , pancreatic cancer , medulloblastoma and chondrosarcoma. [ 4 ] The drug was developed by the biotechnology/pharmaceutical company Genentech . [ 3 ]
Vismodegib is indicated for people with basal-cell carcinoma (BCC) which has metastasized to other parts of the body, relapsed after surgery, or cannot be treated with surgery or radiation. [ 3 ] [ 5 ]
The substance acts as a cyclopamine -competitive antagonist of the smoothened receptor (SMO) which is part of the Hedgehog signaling pathway . [ 4 ] SMO inhibition causes the transcription factors GLI1 and GLI2 to remain inactive, which prevents the expression of tumor mediating genes within the hedgehog pathway. [ 6 ] This pathway is pathogenetically relevant in more than 90% of basal-cell carcinomas. [ 7 ]
In clinical trials, common side effects included gastrointestinal disorders (nausea, vomiting, diarrhoea, constipation), muscle spasms, fatigue, hair loss, and dysgeusia (distortion of the sense of taste). [ 2 ]
Vismodegib has undergone several promising phase I and phase II clinical trials for its use in treating medulloblastoma. [ 8 ]
|
https://en.wikipedia.org/wiki/Vismodegib
|
VISTA Variables in the Via Lactea – The VVV Survey – is observing the Milky Way 's bulge and southern disk in the near-infrared using the capabilities of the VISTA Telescope at Paranal, Chile.
The survey started in February 2010 and was completed in 2016. In 2015 the VVV Science Team applied to the second cycle of Public Surveys for VISTA as the VVV eXtended Survey (VVVX) and started observing in the second semester of 2016, to be completed by 2020.
The VVV Survey is a public infrared (IR) variability survey of the Milky Way bulge and an adjacent section of the mid-plane where star formation activity is high. Completing the survey will take 1929 hours, covering 10⁹ stars within an area of 520 square degrees, including 33 known globular clusters and around 350 open clusters. The final products will be a deep IR atlas in 5 passbands and a catalogue of 10⁶ variable stars. These will produce a 3-D map of the surveyed region (unlike single-epoch surveys that only give 2-D maps) using well-understood primary distance indicators such as RR Lyrae stars. It will yield important information on the ages of the stellar populations. [ 2 ]
The observations will be combined with data from MACHO , OGLE , EROS, VST , SPITZER , HST , CHANDRA , INTEGRAL, and ALMA for a complete understanding of the variable sources in the inner Milky Way. Several important implications for the history of the Milky Way, for globular cluster evolution, for the population census of the bulge and center, and for pulsation theory would follow from this survey. [ 2 ]
|
https://en.wikipedia.org/wiki/Vista_Variables_in_the_Via_Lactea
|
VisualFEA is a finite element analysis software program for Microsoft Windows and Mac OS X . It is developed and distributed by Intuition Software, Inc. of South Korea, and used chiefly for structural and geotechnical analysis. Its strongest point is its intuitive, user-friendly design based on graphical pre- and postprocessing capabilities. It has educational features for teaching and learning structural mechanics, and finite element analysis through graphical simulation. It is widely used in college-level courses related to structural mechanics and finite element methods.
VisualFEA is a full-fledged finite element analysis program with many easy-to-use but powerful features, which can be classified largely into four parts: finite element processing, pre-processing, post-processing and educational simulation. All the functions are integrated into a single executable module, a characteristic that distinguishes the program from other finite element analysis programs, which are generally composed of multiple modules. The whole procedure from pre-processing to analysis to post-processing can be completed on the spot, without launching one program after another or pipelining data from one program to another.
VisualFEA can solve the following types of problems.
A finite element model in VisualFEA consists of various objects: curve, primitive surface, node, element and mesh. VisualFEA has its own CAD-like capabilities of creating graphical objects without aid of external programs. VisualFEA can create structured or unstructured meshes in two- or three-dimensional space using the following mesh generation schemes.
The program can save the generated mesh data in text format for use by other application programs. Other pre-processing capabilities include the following items.
VisualFEA has various functions for visualizing the numerical data generated by solving the analysis models. The most frequently used graphical representations of the data are contour and vector images. Many other forms of graphical representation are available in VisualFEA. [ 1 ]
VisualFEA can be used as a tool for computer-aided education in structural mechanics and the finite element method. The tools are operated with the user-created modeling data and the ensuing analysis results on the basis of finite element technology. They are devised to promote understanding of, and stimulate interest in, the subjects by substantiating the conceptual principles and visually exhibiting the complex computational processes with the aid of interactive computer graphics. The topics covered by the educational functions are as follows.
VisualFEA/CBT is an educational version of the program [ 6 ] published by John Wiley & Sons, Inc. as a companion program to a textbook [ 7 ] on the finite element method. The program is limited to handling 3000 nodes.
|
https://en.wikipedia.org/wiki/VisualFEA
|
Visual angle is the angle a viewed object subtends at the eye, usually stated in degrees of arc .
It also is called the object's angular size .
The diagram on the right shows an observer's eye looking at a frontal extent (the vertical arrow) that has a linear size S, located at a distance D from point O.
For present purposes, point O can represent the eye's nodal points at about the center of the lens, and also the center of the eye's entrance pupil, which is only a few millimeters in front of the lens.
The three lines from object endpoint A heading toward the eye indicate the bundle of light rays that pass through the cornea, pupil and lens to form an optical image of endpoint A on the retina at point a.
The central line of the bundle represents the chief ray.
The same holds for object point B and its retinal image at b.
The visual angle V is the angle between the chief rays of A and B.
The visual angle V can be measured directly using a theodolite placed at point O.
Or, it can be calculated (in radians) using the formula V = 2 arctan(S / (2D)). [ 1 ]
However, for visual angles smaller than about 10 degrees, the simpler small-angle formula V ≈ S/D (with V in radians) provides a very close approximation.
As the above sketch shows, a real image of the object is formed on the retina between points a and b (see visual system ). For small angles, the size of this retinal image R is approximately R ≈ nV, where n is the distance from the nodal points to the retina, about 17 mm.
If one looks at a one-centimeter object at a distance of one meter and a two-centimeter object at a distance of two meters, both subtend the same visual angle of about 0.01 rad or 0.57°. Thus they have the same retinal image size R ≈ 0.17 mm.
That is just a bit larger than the retinal image size for the moon, which is about 0.15 mm, because, with the moon's mean diameter S = 3474 kilometers (2159 miles) and the earth-to-moon mean distance D averaging 383,000 kilometers (238,000 miles), V ≈ 0.009 rad ≈ 0.52 deg.
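These relationships are easy to check numerically. The following Python sketch (the function names are illustrative, not from any standard library) reproduces the worked examples above:

```python
import math

def visual_angle(S, D):
    """Visual angle V in radians for a frontal object of linear size S at distance D (same units)."""
    return 2 * math.atan(S / (2 * D))

def retinal_image_size(V, n=17.0):
    """Small-angle retinal image size in mm, with n = nodal-point-to-retina distance (~17 mm)."""
    return n * V

# A 1 cm object at 1 m and a 2 cm object at 2 m subtend the same angle.
V1 = visual_angle(0.01, 1.0)
V2 = visual_angle(0.02, 2.0)
print(V1, V2)                    # both ~0.01 rad (~0.57 deg)
print(retinal_image_size(V1))    # ~0.17 mm

# The Moon: S = 3474 km, D = 383,000 km.
Vm = visual_angle(3474, 383000)
print(math.degrees(Vm))          # ~0.52 deg
print(retinal_image_size(Vm))    # ~0.15 mm
```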
Also, for some easy observations, if one holds one's index finger at arm's length, the width of the index fingernail subtends approximately one degree, and the width of the thumb at the first joint subtends approximately two degrees. [ 2 ]
Therefore, if one is interested in the performance of the eye or the first processing steps in the visual cortex , it does not make sense to refer to the absolute size of a viewed object (its linear size S). What matters is the visual angle V, which determines the size of the retinal image.
In astronomy the term apparent size refers to the physical angle V or angular diameter .
But in psychophysics and experimental psychology the adjective "apparent" refers to a person's subjective experience.
So, "apparent size" has referred to how large an object looks, also often called its "perceived size".
Additional confusion has occurred because there are two qualitatively different "size" experiences for a viewed object. [ 3 ] One is the perceived visual angle V′ (or apparent visual angle), which is the subjective correlate of V, also called the object's perceived or apparent angular size.
The perceived visual angle is best defined as the difference between the perceived directions of the object's endpoints from oneself. [ 4 ]
The other "size" experience is the object's perceived linear size S ′ {\displaystyle S'} (or apparent linear size) which is the subjective correlate of S {\displaystyle S} , the object's physical width or height or diameter.
Widespread use of the ambiguous terms "apparent size" and "perceived size" without specifying the units of measure has caused confusion.
The brain's primary visual cortex (area V1 or Brodmann area 17) contains a spatially isomorphic representation of the retina (see retinotopy ). Loosely speaking, it is a distorted "map" of the retina. Accordingly, the size R of a given retinal image determines the extent of the neural activity pattern eventually generated in area V1 by the associated retinal activity pattern. Murray, Boyaci, & Kersten (2006) used functional magnetic resonance imaging (fMRI) to show that an increase in a viewed target's visual angle, which increases R, also increases the extent of the corresponding neural activity pattern in area V1.
The observers in the experiment carried out by Murray and colleagues viewed a flat picture with two discs that subtended the same visual angle V and formed retinal images of the same size R, but the perceived angular size V′ of one was about 17% larger than that of the other, due to differences in the background patterns for the discs. It was shown that the areas of activity in V1 related to the discs were of unequal size, despite the retinal images being the same size. This size difference in area V1 correlated with the 17% illusory difference between the perceived visual angles. This finding has implications for spatial illusions such as the visual angle illusion . [ 5 ]
|
https://en.wikipedia.org/wiki/Visual_angle
|
Visual calculus , invented by Mamikon Mnatsakanian (known as Mamikon), is an approach to solving a variety of integral calculus problems. [ 1 ] Many problems that would otherwise seem quite difficult yield to the method with hardly a line of calculation. Mamikon collaborated with Tom Apostol on the 2013 book New Horizons in Geometry describing the subject.
Mamikon devised his method in 1959 while an undergraduate, first applying it to a well-known geometry problem: find the area of a ring ( annulus ), given the length of a chord tangent to the inner circumference. Perhaps surprisingly, no additional information is needed; the solution does not depend on the ring's inner and outer dimensions.
The traditional approach involves algebra and application of the Pythagorean theorem . Mamikon's method, however, envisions an alternate construction of the ring: first the inner circle alone is drawn, then a constant-length tangent is made to travel along its circumference, "sweeping out" the ring as it goes.
Now if all the (constant-length) tangents used in constructing the ring are translated so that their points of tangency coincide, the result is a circular disk of known radius (and easily computed area). Indeed, since the inner circle's radius is irrelevant, one could just as well have started with a circle of radius zero (a point)—and sweeping out a ring around a circle of zero radius is indistinguishable from simply rotating a line segment about one of its endpoints and sweeping out a disk.
Mamikon's insight was to recognize the equivalence of the two constructions; and because they are equivalent, they yield equal areas. Moreover, the two starting curves need not be circular, a finding not easily proven by more traditional geometric methods. This yields Mamikon's theorem : the area of a tangent sweep is equal to the area of its tangent cluster, regardless of the shape of the original curve.
The area of a cycloid can be calculated by considering the area between it and an enclosing rectangle, a region swept out by the cycloid's tangents. These tangents can all be clustered to form a circle. If the circle generating the cycloid has radius r, then this tangent-cluster circle also has radius r and area πr². The area of the rectangle is 2r × 2πr = 4πr². Therefore, the area of the cycloid is 4πr² − πr² = 3πr²: it is 3 times the area of the generating circle.
The tangent cluster can be seen to be a circle because the cycloid is generated by a circle, and the tangent to the cycloid is at a right angle to the line from the generating point to the rolling contact point. Thus the tangent and the line to the contact point form a right triangle in the generating circle. This means that, clustered together, the tangents describe the shape of the generating circle. [ 3 ]
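Mamikon's result can also be cross-checked by conventional integral calculus. Below is a minimal sketch using the sympy library, with the standard parametrization of the cycloid:

```python
import sympy as sp

# Parametric cycloid traced by a point on a circle of radius r rolling along a line:
#   x(t) = r*(t - sin t),  y(t) = r*(1 - cos t),  one arch for t in [0, 2*pi]
r, t = sp.symbols('r t', positive=True)
x = r * (t - sp.sin(t))
y = r * (1 - sp.cos(t))

# Area under one arch: integral of y dx = integral of y * x'(t) dt
area = sp.integrate(y * sp.diff(x, t), (t, 0, 2 * sp.pi))
print(sp.simplify(area))  # 3*pi*r**2, i.e. 3 times the generating circle's area
```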
|
https://en.wikipedia.org/wiki/Visual_calculus
|
A visual comparison is a comparison of two or more things by eye. This might be done by placing them side by side; by overlaying them; by alternating between images; or by presenting each image to a separate eye. [ 1 ]
Such comparisons are the first stage in a child's development of an understanding of geometry and measurement, before they move to an understanding of measuring devices such as a ruler. [ 2 ]
People with sufficient control over the parallax of their eyeballs (e.g. those who can easily view random-dot stereograms ) can hold up two paper printouts and go cross-eyed to superimpose them. This invokes deep, fast, built-in image comparison wetware (the same machinery responsible for depth perception ) and differences stand out almost immediately. This technique is good for finding edits in graphical images, or for comparing an image with a compressed version to spot artefacts. [ 3 ]
Visual comparison with a standard chart or reference is often used as a means of measuring complex phenomena such as the weather , sea states or the roughness of a river. [ 4 ] A colour chart is used for this purpose in many contexts such as chemistry , cosmetics , medical testing and photography .
Comparison by eye may also be used as a source of amusement or intelligence testing , as in the popular puzzle of spot the difference .
In policing , the technique is used for analysis of fingerprints and identity parades .
The visual comparison task can be simplified by using computer software that automatically aligns a pair of images based on common visual features present in the two images.
A visual diff or vdiff finds differences between two files by eyeball search . The term optical diff has also been reported, and is sometimes more specifically used for the act of superimposing two nearly identical printouts on one another and holding them up to a light to spot differences. Though this method is poor for detecting omissions in the ‘rear’ file, it can also be used with printouts of graphics, a claim few diff programs can make. [ 3 ]
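In software, the same idea can be approximated by differencing two images pixel by pixel. Below is a minimal sketch using the Pillow library; the file names are illustrative, and the two images are assumed to have identical dimensions:

```python
from PIL import Image, ImageChops

# Load the two images to compare; both must have the same size and mode.
a = Image.open("before.png").convert("RGB")
b = Image.open("after.png").convert("RGB")

# Pixel-wise absolute difference; identical regions come out black.
diff = ImageChops.difference(a, b)

bbox = diff.getbbox()  # bounding box of all non-zero (changed) pixels, or None
if bbox is None:
    print("Images are pixel-identical.")
else:
    print(f"Differences found inside region {bbox}")
    diff.save("diff.png")  # bright pixels mark the edits
```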
|
https://en.wikipedia.org/wiki/Visual_comparison
|
Visual control is a business management technique employed in many places where information is communicated by using visual signals instead of texts or other written instructions. The design is deliberate in allowing quick recognition of the information being communicated, in order to increase efficiency and clarity. These signals can be of many forms, from different coloured clothing for different teams, to focusing measures upon the size of the problem and not the size of the activity, to kanban , obeya and heijunka boxes and many other diverse examples. In The Toyota Way , it is also known as mieruka .
Visual control methods aim to increase the efficiency and effectiveness of a process by making its steps more visible. The theory behind visual control is that if something is clearly visible or in plain sight, it is easy to remember and keep at the forefront of the mind. Another aspect of visual control is that everyone is given the same visual cues, and so all are likely to have the same vantage point.
There are many different techniques that are used to apply visual control in the workplace. Some companies use visual control as an organizational tool for materials: a clearly labeled storage board lets the employee know exactly where a tool belongs and which tools are missing from the display board. Another simple example of a common visual control is having reminders posted on cubicle walls so that they remain in plain sight. Visual signs and signals communicate information that is needed to make effective decisions. These decisions may be safety oriented or they may give reminders as to what steps should be taken to resolve a problem. Most companies use visual controls to one degree or another, many of them not even realizing that the visual controls they are making have a name and a function in the workplace. Whether it is recognized by the name "visual control" or not, replacing text or numbers with graphics makes a set of information easier to understand at a glance, making it a more efficient way of communicating a message. Visual control is also commonly used for internal team communication. [ 1 ]
Visual controls are designed to make the control and management of a company as simple as possible. This entails making problems, abnormalities, or deviations from standards visible to everyone. When these deviations are visible and apparent to all, corrective action can be taken to immediately correct these problems.
Visual controls are meant to display the operating or progress status of a given operation in an easy to see format and also to provide instruction and to convey information. A visual control system must have an action component associated with it in the event that the visually represented procedures are not being followed in the real production process. Therefore, visual controls must also have a component where immediate feedback is provided to workers.
Visual controls fall into two groups, with seven types of application between them: the displays group and the controls group.
|
https://en.wikipedia.org/wiki/Visual_control
|
The visual cycle is a process in the retina that replenishes the molecule retinal for its use in vision . Retinal is the chromophore of most visual opsins , meaning it captures the photons that begin the phototransduction cascade . When a photon is absorbed, the 11-cis retinal photoisomerizes into all-trans retinal as it is ejected from the opsin protein. Each molecule of retinal must travel from the photoreceptor cell to the retinal pigment epithelium (RPE) and back in order to be refreshed and combined with another opsin. This closed enzymatic pathway of 11-cis retinal is sometimes called Wald's visual cycle after George Wald (1906–1997), who received the Nobel Prize in 1967 for his work towards its discovery.
Retinal is a chromophore that forms photosensitive retinylidene proteins when covalently bound to proteins called opsins . Retinal can be photoisomerized by itself, but it must be bound to an opsin protein both to trigger the phototransduction cascade and to tune the spectral sensitivity to longer wavelengths, which enables color vision .
Retinal is a retinoid and the aldehyde form of vitamin A . Retinal is interconvertible with retinol , the transport and storage form of vitamin A. During the visual cycle, retinal moves between several different isomers and is also converted to retinol and retinyl ester . Retinoids can be derived from the oxidation of carotenoids like beta carotene or can be consumed directly. To reach the retina, it is bound to retinol-binding protein (RBP) and transthyretin , which prevents its filtration in the glomeruli .
As in transport via the RBP-transthyretin pathway, retinoids must always be bound to chaperone molecules, for several reasons: retinoids are toxic, insoluble in aqueous solutions, and prone to oxidation, and as such they must be bound and protected within the body. The body uses a variety of chaperones, particularly in the retina, to transport retinoids.
The visual cycle is consistent within mammals , and is summarized as follows:
Steps 3, 4, 5, and 6 occur in rod cell outer segments ; Steps 1, 2, and 7 occur in retinal pigment epithelium (RPE) cells.
When a photon is absorbed, 11-cis-retinal is transformed to all-trans-retinal, and it moves to the exit site of rhodopsin . It will not leave the opsin protein until another fresh chromophore comes to replace it, except in the ABCR pathway. Whilst still bound to the opsin, all-trans-retinal is transformed into all-trans-retinol by all-trans-retinol dehydrogenase. It then proceeds to the cell membrane of the rod, where it is chaperoned to the retinal pigment epithelium (RPE) by interphotoreceptor retinoid-binding protein (IRBP). It then enters the RPE cells and is transferred to the cellular retinol-binding protein (CRBP) chaperone.
Once inside the RPE cell and bound to CRBP, the all-trans-retinol is esterified by lecithin retinol acyltransferase (LRAT) to form a retinyl ester. The retinyl esters of the RPE are chaperoned by a protein known as RPE65 . It is in this form that the RPE stores most of its retinoids, as the RPE stores 2-3 times more retinoids than the neural retina itself. When further chromophore is required, the retinyl esters are acted on by isomerohydrolase to produce 11-cis-retinol, which is transferred to cellular retinaldehyde-binding protein (CRALBP). 11-cis-Retinol is transformed into 11-cis-retinal by 11-cis-retinol dehydrogenase , then shipped back to the photoreceptor cells via IRBP . There, it replaces the spent chromophore in opsin molecules, rendering the opsin photosensitive.
Under normal circumstances, the spent chromophore is discharged from the protein by an incoming "recharged" chromophore. However, sometimes the spent chromophore may leave the opsin protein prior to its replacement, when it is bound to the ABCA4 protein (also known as ABCR). At this stage, it is also transformed to all-trans-retinol, and then leaves the photoreceptor outer segment via the IRBP chaperone. It then follows the conventional visual cycle. It is from this pathway that the presence of opsin without a chromophore can be explained.
The visual cycle can be regulated by the retinal G-protein-coupled Receptor (RGR-opsin) system. When light activates the RGR-opsin, the recycling of chromophore in the RPE is accelerated. This mechanism provides additional chromophore after intense bleaches, and can be seen as an important mechanism in the early phases of dark adaptation and chromophore replenishment.
It is believed that an alternative visual cycle exists which uses Müller glial cells instead of the retinal pigment epithelium. In this pathway, cones reduce all-trans retinal to all-trans retinol via all-trans retinol dehydrogenase, then transport all-trans retinol to Müller cells. There, it is transformed into 11-cis retinol by all-trans retinol isomerase, and it can either be stored as retinyl esters within Müller cells or transported back to the cone photoreceptors, where it is transformed from 11-cis retinol to 11-cis retinal by 11-cis retinal dehydrogenase. This pathway helps explain the rapid dark adaptation of the cone system and the presence of 11-cis retinal dehydrogenase in cone photoreceptors, as the enzyme is not found in rods, only in the RPE. [ 3 ]
Melanopsin is a visual opsin present in intrinsically photosensitive retinal ganglion cells (ipRGCs), also with a retinal chromophore. However, unlike the rod and cone pigments, melanopsin can act as both the excitable photopigment and a photoisomerase . Melanopsin is therefore able to isomerize all-trans-retinal into 11-cis-retinal itself when stimulated with another photon. An ipRGC therefore does not rely on Müller cells or retinal pigment epithelium cells for this conversion. [ 4 ]
A possible mechanism for Leber's congenital amaurosis (LCA) has been proposed as the deficiency of RPE65 . Without the RPE65 protein, the RPE is unable to store retinyl esters, and the visual cycle is therefore interrupted. At the beginning stages of the disease, the cone cells are unaffected, as they can rely on the alternative Müller cell visual cycle. However, rods do not have access to this alternative and are rendered inert. LCA therefore manifests as nyctalopia (night blindness). In the later stages of the disease, general retinopathy is observed as the rod cells lose their ability to signal. As a result, the rods continually secrete glutamate , a neurotransmitter, at a rate the Müller cells are unable to absorb. Glutamate builds up within the retina to neurotoxic levels. The RPE65 deficiency would be genetic in origin, and it is only one of many proposed possible pathophysiologies of the disease. However, a retinal gene therapy that reintroduces normal RPE65 genes has been approved by the FDA since 2017. [ 5 ]
|
https://en.wikipedia.org/wiki/Visual_cycle
|
A visual hull is a geometric entity created by the shape-from-silhouette 3D reconstruction technique introduced by A. Laurentini. This technique assumes that the foreground object in an image can be separated from the background . Under this assumption, the original image can be thresholded into a foreground/background binary image, which we call a silhouette image. The foreground mask, known as a silhouette, is the 2D projection of the corresponding 3D foreground object. Along with the camera viewing parameters, the silhouette defines a back-projected generalized cone that contains the actual object. This cone is called a silhouette cone . The upper right thumbnail shows two such cones produced from two silhouette images taken from different viewpoints. The intersection of the two cones is called a visual hull, [ 1 ] which is a bounding geometry of the actual 3D object (see the bottom right thumbnail). When the reconstructed geometry is only used for rendering from a different viewpoint, the implicit reconstruction together with rendering can be done using graphics hardware. [ 2 ]
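A common way to approximate the intersection of silhouette cones is voxel carving: a candidate 3D point is kept only if it projects inside the silhouette in every view. The sketch below assumes calibrated pinhole cameras described by 3x4 projection matrices; the function name and data layout are illustrative, not from any standard library:

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_pts):
    """Approximate a visual hull by voxel carving.

    silhouettes : list of HxW boolean foreground masks, one per camera
    projections : list of 3x4 camera projection matrices (world -> pixel)
    grid_pts    : Nx3 array of candidate 3D points (a voxel grid)

    A point survives only if it projects inside the silhouette of every
    view, i.e. it lies in the intersection of all silhouette cones.
    """
    keep = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # Nx4 homogeneous

    for mask, P in zip(silhouettes, projections):
        pix = homog @ P.T                                # Nx3 homogeneous pixel coords
        u = (pix[:, 0] / pix[:, 2]).round().astype(int)  # column index
        v = (pix[:, 1] / pix[:, 2]).round().astype(int)  # row index
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                                      # carve away points outside this cone

    return grid_pts[keep]
```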
A technique used in some modern touchscreen devices employs cameras placed in the corners situated opposite infrared LEDs. The one-dimensional projection (shadow) of objects on the surface may be used to reconstruct the convex hull of the object. [ citation needed ]
The visual hull generation method has also been used within experimental tele-meeting systems [ 3 ] that aim to allow a user in a remote location to interact with virtual objects. The method uses multiple cameras to capture the real-world movements and interactions of the "sender", employing hardware-accelerated volumetric visual hull representation to create 3D volume from 2D multi-view images. Its ultimate aim is to allow 3D collaboration between the two users in the virtual realm, with the visual hull technique reducing the computational power required to allow this type of interaction and enabling the use of consumer goods such as the Wii Remote as a tool for interaction.
|
https://en.wikipedia.org/wiki/Visual_hull
|
Visual phototransduction is the sensory transduction process of the visual system by which light is detected by photoreceptor cells ( rods and cones ) in the vertebrate retina . A photon is absorbed by a retinal chromophore (each bound to an opsin ), which initiates a signal cascade through several intermediate cells, then through the retinal ganglion cells (RGCs) comprising the optic nerve.
Light enters the eye, passes through the optical media, then the inner neural layers of the retina before finally reaching the photoreceptor cells in the outer layer of the retina. The light may be absorbed by a chromophore bound to an opsin , which photoisomerizes the chromophore, initiating both the visual cycle , which "resets" the chromophore, and the phototransduction cascade, which transmits the visual signal to the brain. The cascade begins with a graded polarisation (an analog signal ) of the excited photoreceptor cell: its membrane potential hyperpolarizes from the depolarized resting level it holds in darkness (about −40 mV), in proportion to the light intensity. At rest (in darkness), the photoreceptor cells continually release glutamate at the synaptic terminal. [ 1 ] The transmitter release rate is lowered as the cell hyperpolarizes with increasing light intensity. Each synaptic terminal makes up to 500 contacts with horizontal cells and bipolar cells . [ 1 ] These intermediate cells (along with amacrine cells ) perform comparisons of photoreceptor signals within a receptive field , but their precise functionalities are not well understood. The signal remains a graded polarization in all cells until it reaches the RGCs , where it is converted to an action potential and transmitted to the brain. [ 1 ]
The photoreceptor cells involved in vertebrate vision are the rods , the cones , and the photosensitive ganglion cells (ipRGCs). These cells contain a chromophore ( 11-cis-retinal , the aldehyde of vitamin A1 and its light-absorbing portion) that is bound to a cell membrane protein, opsin . Rods are responsible for vision under low light intensity and for contrast detection. Because all rods have the same spectral response, no color information can be deduced from the rods alone, as in low-light conditions for example. Cones, on the other hand, are of different kinds with different frequency responses, such that color can be perceived through comparison of the outputs of the different kinds of cones. Each cone type responds best to certain wavelengths , or colors, of light because each type has a slightly different opsin. The three types of cones are L-cones, M-cones and S-cones, which respond optimally to long wavelengths (reddish color), medium wavelengths (greenish color), and short wavelengths (bluish color) respectively. Humans have trichromatic photopic vision consisting of three opponent process channels that enable color vision . [ 2 ] Rod photoreceptors are the most common cell type in the retina and develop quite late. Most cells become postmitotic before birth, but differentiation occurs after birth. In the first week after birth, cells mature and the eye becomes fully functional at the time of eye opening. The visual pigment rhodopsin (rho) is the first known sign of differentiation in rods. [ 3 ]
To understand the photoreceptor's behavior in response to different light intensities, it is necessary to understand the roles of different currents.
There is an ongoing outward potassium current through nongated K+-selective channels. This outward current tends to hyperpolarize the photoreceptor at around −70 mV (the equilibrium potential for K+).
There is also an inward sodium current carried by cGMP -gated sodium channels . This " dark current " depolarizes the cell to around −40 mV. This is significantly more depolarized than most other neurons.
A high density of Na+-K+ pumps enables the photoreceptor to maintain a steady intracellular concentration of Na+ and K+.
When light intensity increases, the membrane potential decreases (hyperpolarization), and the photoreceptors' release of the stimulating neurotransmitter glutamate is reduced. When light intensity decreases, that is, in a dark environment, the membrane depolarizes and glutamate release by the photoreceptors increases. [ 1 ]
Photoreceptor cells are unusual cells in that they depolarize in response to absence of stimuli or scotopic conditions (darkness). In photopic conditions (light), photoreceptors hyperpolarize to a potential of −60 mV.
In the dark, cGMP levels are high and keep cGMP-gated sodium channels open allowing a steady inward current, called the dark current. This dark current keeps the cell depolarized at about −40 mV, leading to glutamate release which inhibits excitation of neurons.
The depolarization of the cell membrane in scotopic conditions opens voltage-gated calcium channels. An increased intracellular concentration of Ca 2+ causes vesicles containing glutamate, a neurotransmitter , to merge with the cell membrane, therefore releasing glutamate into the synaptic cleft , an area between the end of one cell and the beginning of another neuron . Glutamate, though usually excitatory, functions here as an inhibitory neurotransmitter.
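The interplay of the two currents can be illustrated with a simple parallel-conductance model. The sketch below is purely illustrative: the conductance ratios and the cGMP-gated channel's reversal potential are assumed values, chosen only to reproduce the roughly −40 mV dark potential and the light-driven hyperpolarization described above:

```python
# Illustrative two-conductance model of the photoreceptor membrane potential.
# NOT a fitted physiological model: E_CGMP and the conductances are assumed.
E_K = -70.0      # K+ equilibrium potential (mV), per the text
E_CGMP = 10.0    # reversal potential of the cGMP-gated cation channel (assumed)

def membrane_potential(g_k, g_cgmp):
    """Steady-state potential of two conductances in parallel (weighted average)."""
    return (g_k * E_K + g_cgmp * E_CGMP) / (g_k + g_cgmp)

print(membrane_potential(1.0, 0.6))  # dark: cGMP channels open  -> about -40 mV
print(membrane_potential(1.0, 0.1))  # light: channels closing   -> about -63 mV
```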
In the cone pathway, glutamate hyperpolarizes on-center bipolar cells and depolarizes off-center bipolar cells.
In summary: light closes cGMP-gated sodium channels, reducing the influx of both Na+ and Ca2+ ions. Stopping the influx of Na+ ions effectively switches off the dark current. Reducing this dark current causes the photoreceptor to hyperpolarise , which reduces glutamate release, which in turn reduces the inhibition of retinal nerves, leading to excitation of these nerves. This reduced Ca2+ influx during phototransduction enables deactivation and recovery from phototransduction, as discussed below under the deactivation of the phototransduction cascade.
In light, low cGMP levels close Na+ and Ca2+ channels, reducing intracellular Na+ and Ca2+.
During recovery ( dark adaptation ), the low Ca2+ levels induce recovery (termination of the phototransduction cascade), as follows:
In more detail:
GTPase-accelerating protein (GAP) of RGS (regulators of G protein signaling) interacts with the alpha subunit of transducin and causes it to hydrolyse its bound GTP to GDP, thus halting the action of phosphodiesterase and stopping the transformation of cGMP to GMP. This deactivation step of the phototransduction cascade (the deactivation of the G protein transducin) was found to be the rate-limiting step in the deactivation of the phototransduction cascade. [ 7 ]
In other words: Guanylate Cyclase Activating Protein (GCAP) is a calcium binding protein, and as the calcium levels in the cell have decreased, GCAP dissociates from its bound calcium ions, and interacts with Guanylate Cyclase, activating it. Guanylate Cyclase then proceeds to transform GTP to cGMP, replenishing the cell's cGMP levels and thus reopening the sodium channels that were closed during phototransduction.
Finally, metarhodopsin II is deactivated. Recoverin, another calcium-binding protein, is normally bound to rhodopsin kinase when calcium is present. When the calcium levels fall during phototransduction, the calcium dissociates from recoverin, and rhodopsin kinase is released and phosphorylates metarhodopsin II , which decreases its affinity for transducin. Arrestin, another protein, then binds the phosphorylated metarhodopsin II, completely deactivating it. Thus phototransduction is deactivated, and the dark current and glutamate release are restored. It is this pathway, where metarhodopsin II is phosphorylated and bound to arrestin and thus deactivated, which is thought to be responsible for the S2 component of dark adaptation. The S2 component represents a linear section of the dark adaptation function present at the beginning of dark adaptation for all bleaching intensities.
The visual cycle occurs via G-protein-coupled receptors called retinylidene proteins , which consist of a visual opsin and a chromophore, 11-cis-retinal . The 11-cis-retinal is covalently linked to the opsin receptor via a Schiff base . When it absorbs a photon , 11-cis-retinal undergoes photoisomerization to all-trans-retinal , which changes the conformation of the opsin GPCR, leading to signal transduction cascades that cause closure of the cyclic GMP-gated cation channel and hyperpolarization of the photoreceptor cell. Following photoisomerization, all-trans-retinal is released from the opsin protein and reduced to all-trans- retinol , which travels to the retinal pigment epithelium to be "recharged". It is first esterified by lecithin retinol acyltransferase (LRAT) and then converted to 11-cis-retinol by the isomerohydrolase RPE65 . The isomerase activity of RPE65 has been shown; it is uncertain whether it also acts as the hydrolase. [ 8 ] Finally, it is oxidized to 11-cis-retinal before traveling back to the photoreceptor cell outer segment, where it is again conjugated to an opsin to form new, functional visual pigment ( retinylidene protein ), namely photopsin or rhodopsin .
Visual phototransduction in invertebrates like the fruit fly differs from that of vertebrates, described up to now. The primary basis of invertebrate phototransduction is the PI(4,5)P2 cycle . Here, light induces a conformational change in rhodopsin and converts it into metarhodopsin. This helps in the dissociation of the G-protein complex. The alpha subunit of this complex activates the PLC enzyme (PLC-beta), which hydrolyzes PIP2 into DAG . This hydrolysis leads to the opening of TRP channels and an influx of calcium. [ citation needed ]
Invertebrate photoreceptor cells differ morphologically and physiologically from their vertebrate counterparts. Visual stimulation in vertebrates causes a hyperpolarization (weakening) of the photoreceptor membrane potential, whereas invertebrates experience a depolarization with light intensity. Single-photon events produced under identical conditions in invertebrates differ from those of vertebrates in time course and size. Likewise, multi-photon events are longer than single-photon responses in invertebrates. However, in vertebrates, the multi-photon response is similar to the single-photon response. Both groups show light adaptation, under which single-photon events are smaller and faster. Calcium plays an important role in this adaptation. Light adaptation in vertebrates is primarily attributable to calcium feedback, but in invertebrates cyclic AMP is another control on dark adaptation. [ 9 ] [ verification needed ]
|
https://en.wikipedia.org/wiki/Visual_phototransduction
|
A visual prosthesis , often referred to as a bionic eye , is a visual device intended to restore functional vision in those with partial or total blindness . Many devices have been developed, usually modeled on the cochlear implant or bionic ear devices, a type of neural prosthesis in use since the mid-1980s. The idea of using electrical current (e.g., electrically stimulating the retina or the visual cortex ) to provide sight dates back to the 18th century, discussed by Benjamin Franklin , [ 1 ] Tiberius Cavallo , [ 2 ] and Charles LeRoy. [ 3 ]
The ability to give sight to a blind person via a bionic eye depends on the circumstances surrounding the loss of sight. For retinal prostheses, which are the most prevalent visual prosthetic under development (due to ease of access to the retina, among other considerations), patients with vision loss due to degeneration of photoreceptors ( retinitis pigmentosa , choroideremia , geographic atrophy macular degeneration ) are the best candidates for treatment. Candidates for visual prosthetic implants find the procedure most successful if the optic nerve developed prior to the onset of blindness. Persons born with blindness may lack a fully developed optic nerve , which typically develops prior to birth, [ 4 ] though neuroplasticity makes it possible for the nerve, and sight, to develop after implantation [ citation needed ] .
Visual prosthetics are being developed as a potentially valuable aid for individuals with visual degradation . Only three visual prosthetic devices have received marketing approval in the EU. [ 5 ] Argus II, co-developed at the University of Southern California (USC) Eye Institute [ 6 ] and manufactured by Second Sight Medical Products Inc., was the first device to receive marketing approval (CE Mark in Europe in 2011). Most other efforts remain investigational; Retina Implant AG's Alpha IMS won a CE Mark in July 2013 and is a significant improvement in resolution. It is not, however, FDA-approved in the US. [ 7 ]
Mark Humayun, who joined the faculty of the Keck School of Medicine of USC Department of Ophthalmology in 2001; [ 8 ] Eugene Dejuan, now at the University of California San Francisco ; engineer Howard D. Phillips; bio-electronics engineer Wentai Liu, now at University of California Los Angeles ; and Robert Greenberg, now of Second Sight, were the original inventors of the active epi-retinal prosthesis [ 9 ] and demonstrated proof of principle in acute patient investigations at Johns Hopkins University in the early 1990s. In the late 1990s the company Second Sight [ 10 ] was formed by Greenberg along with medical device entrepreneur Alfred E. Mann. [ 11 ] : 35 Their first-generation implant had 16 electrodes and was implanted in six subjects by Humayun at the University of Southern California between 2002 and 2004. [ 11 ] : 35 [ 12 ] In 2007, the company began a trial of its second-generation, 60-electrode implant, dubbed the Argus II, in the US and in Europe. [ 13 ] [ 14 ] In total 30 subjects participated in the studies spanning 10 sites in four countries. In the spring of 2011, based on the results of the clinical study which were published in 2012, [ 15 ] Argus II was approved for commercial use in Europe, and Second Sight launched the product later that same year. The Argus II was approved by the United States FDA on 14 February 2013. Three US government funding agencies (National Eye Institute, Department of Energy, and National Science Foundation) have supported the work at Second Sight, USC, UCSC, Caltech, and other research labs. [ 16 ]
Designed by Claude Veraart at the University of Louvain in 2002, this device is a spiral cuff electrode around the optic nerve at the back of the eye. It is connected to a stimulator implanted in a small depression in the skull. The stimulator receives signals from an externally worn camera, which are translated into electrical signals that stimulate the optic nerve directly. [ 17 ]
Although not truly an active prosthesis, an implantable miniature telescope is one type of visual implant that has met with some success in the treatment of end-stage age-related macular degeneration . [ 18 ] [ 19 ] [ 20 ] This type of device is implanted in the eye 's posterior chamber and works by increasing (by about three times) the size of the image projected onto the retina in order to overcome a centrally located scotoma or blind spot. [ 19 ] [ 20 ]
Created by VisionCare Ophthalmic Technologies in conjunction with the CentraSight Treatment Program in 2011, the telescope is about the size of a pea and is implanted behind the iris of one eye. Images are projected onto healthy areas of the central retina, outside the degenerated macula , and are enlarged to reduce the effect the blind spot has on central vision. 2.2x or 2.7x magnification strengths make it possible to see or discern the central object of interest while the other eye is used for peripheral vision, because the eye that has the implant will have limited peripheral vision as a side effect. Unlike a hand-held telescope, the implant moves with the eye, which is its main advantage. Patients using the device may however still need glasses for optimal vision and for close work. Before surgery, patients should first try out a hand-held telescope to see if they would benefit from image enlargement. One of the main drawbacks is that it cannot be used for patients who have had cataract surgery, as the intraocular lens would obstruct insertion of the telescope. It also requires a large incision in the cornea to insert. [ 21 ]
A Cochrane systematic review seeking to evaluate the effectiveness and safety of the implantable miniature telescope for patients with late or advanced age-related macular degeneration found only one ongoing study evaluating the OriLens intraocular telescope, with results expected in 2020. [ 22 ]
A southern German team led by the University Eye Hospital in Tübingen was formed in 1995 by Eberhart Zrenner to develop a subretinal prosthesis.
The chip is located behind the retina and utilizes microphotodiode arrays (MPDA) which collect incident light and transform it into electrical current stimulating the retinal ganglion cells . As natural photoreceptors are far more efficient than photodiodes , visible light is not powerful enough to stimulate the MPDA. Therefore, an external power supply is used to enhance the stimulation current. The German team commenced in vivo experiments in 2000, when evoked cortical potentials were measured from Yucatán micropigs and rabbits. At 14 months post implantation, the implant and the retina surrounding it were examined and there were no noticeable changes to anatomical integrity. The implants were successful in producing evoked cortical potentials in half of the animals tested. The thresholds identified in this study were similar to those required in epiretinal stimulation. Later reports from this group concern the results of a clinical pilot study on 11 participants with retinitis pigmentosa . Some blind patients were able to read letters, recognize unknown objects, and localize a plate, a cup and cutlery. [ 23 ] Two of the patients were found to make microsaccades similar to those of healthy control participants, and the properties of the eye movements depended on the stimuli that the patients were viewing, suggesting that eye movements might be useful measures for evaluating vision restored by implants. [ 24 ] [ 25 ] A multicenter study started in 2010 using the Alpha IMS (produced by Retina Implant AG, Reutlingen, Germany), a fully implantable device with 1500 electrodes, with 10 patients included; preliminary results were presented at ARVO 2011. [ citation needed ] The first UK implantations took place in March 2012 and were led by Robert MacLaren at the University of Oxford and Tim Jackson at King's College Hospital in London. [ 26 ] [ 27 ] David Wong also implanted the Tübingen device in a patient in Hong Kong . [ 28 ]
On 19 March 2019, Retina Implant AG discontinued business activities, citing the innovation-hostile climate created by Europe's rigid regulatory systems and unsatisfactory results in patients. [ 29 ] [ 30 ]
Joseph Rizzo and John Wyatt at the Massachusetts Eye and Ear Infirmary and MIT began researching the feasibility of a retinal prosthesis in 1989, and performed a number of proof-of-concept epiretinal stimulation trials on blind volunteers between 1998 and 2000. They have since developed a subretinal stimulator, an array of electrodes, that is placed beneath the retina in the subretinal space and receives image signals beamed from a camera mounted on a pair of glasses. The stimulator chip decodes the picture information beamed from the camera and stimulates retinal ganglion cells accordingly. Their second generation prosthesis collects data and sends it to the implant through radio frequency fields from transmitter coils that are mounted on the glasses. A secondary receiver coil is sutured around the iris. [ 31 ]
The brothers Alan and Vincent Chow developed a microchip in 2002 containing 3500 photodiodes, which detect light and convert it into electrical impulses that stimulate healthy retinal ganglion cells . The ASR (artificial silicon retina) requires no externally worn devices. [ 17 ]
The original Optobionics Corp. stopped operations, but Chow acquired the Optobionics name and the ASR implants, and plans to reorganize a new company under the same name. [ 32 ] The ASR microchip is a 2 mm-diameter silicon chip (same concept as computer chips) containing ~5,000 microscopic solar cells called "microphotodiodes", each of which has its own stimulating electrode. [ 32 ]
Daniel Palanker and his group at Stanford University developed a photovoltaic retinal prosthesis in 2012, [ 33 ] that includes a subretinal photodiode array and an infrared image projection system mounted on video goggles. Images captured by video camera are processed in a pocket PC and displayed on video goggles using pulsed near-infrared (IR, 880–915 nm) light. These images are projected onto the retina via natural eye optics, and photodiodes in the subretinal implant convert light into pulsed bi-phasic electric current in each pixel. [ 34 ] Electric current flowing through the tissue between the active and return electrode in each pixel stimulates the nearby inner retinal neurons, primarily the bipolar cells, which transmit excitatory responses to the retinal ganglion cells.
This technology is being commercialized by Pixium Vision (PRIMA) and is being evaluated in a clinical trial (2018).
Following this proof of concept, the Palanker group is now focusing on developing pixels smaller than 50 μm using 3-D electrodes and utilizing the effect of retinal migration into voids in the subretinal implant.
Bionic Vision Technologies (BVT) is the company that has taken over the research and commercialisation rights of Bionic Vision Australia (BVA). BVA was a consortium of some of Australia's leading universities and research institutes, funded by the Australian Research Council from 2010; it ceased operations on 31 December 2016. The members of the consortium were the Bionics Institute , UNSW Sydney , CSIRO's Data61 , the Centre for Eye Research Australia (CERA), and the University of Melbourne , along with many more partners. The Australian Federal Government awarded a $42 million ARC grant to Bionic Vision Australia to develop bionic vision technology. [ 35 ]
While the BVA consortium was still together, the team was led by Professor Anthony Burkitt, and it was developing two retinal prostheses. One, known as the Wide-View device, combined novel technologies with materials that had been successfully used in other clinical implants. This approach incorporated a microchip with 98 stimulating electrodes and aimed to provide increased mobility for patients to help them move safely in their environment. This implant would be placed in the suprachoroidal space. Researchers expected the first patient tests with this device to begin in 2013; it is currently unknown whether full trials were conducted, but at least one woman, Dianne Ashworth, was implanted with the device and was able to read letters and numbers using it. [ 36 ] She later wrote a book titled "I Spy with My Bionic Eye" about her life, vision loss, and being the first person to be implanted with the BVA bionic eye device.
BVA was also concurrently developing the High-Acuity device, which incorporated a number of new technologies to bring together a microchip and an implant with 1024 electrodes. The device aimed to provide functional central vision to assist with tasks such as face recognition and reading large print. This high-acuity implant would be inserted epiretinally. Patient tests were planned for this device in 2014 once preclinical testing had been completed; it is unknown whether these trials ever took place.
Patients with retinitis pigmentosa were to be the first to participate in the studies, followed by age-related macular degeneration. Each prototype consisted of a camera, attached to a pair of glasses which sent the signal to the implanted microchip, where it was converted into electrical impulses to stimulate the remaining healthy neurons in the retina. This information was then passed on to the optic nerve and the vision processing centres of the brain.
On 2 January 2019, BVT released positive results from a set of trials on four Australians using a new version of the device. Older versions of the device were only designed to be used temporarily, but the new design allowed the technology to be used constantly, and for the first time outside the lab, even to be taken home. More implants are to be administered throughout 2019. [ 37 ]
According to fact sheets dated March 2019 on BVT's website, the company expects the device to obtain market approval in 3 to 5 years. [ 38 ]
This device is similar in function to the Harvard/MIT device, except that the stimulator chip sits in the primary visual cortex , rather than on the retina. Many subjects have been implanted with a high success rate and limited negative effects. The project first began in 2002 and was still in the developmental phase upon the death of Dobelle; selling the eye for profit was then ruled against [ by whom? ] in favor of donating it to a publicly funded research team. [ 17 ] [ 39 ]
The Laboratory of Neural Prosthetics at the Illinois Institute of Technology (IIT), Chicago, started developing a visual prosthetic using intracortical electrode arrays in 2009. While similar in principle to the Dobelle system, the use of intracortical electrodes allows for greatly increased spatial resolution in the stimulation signals (more electrodes per unit area). In addition, a wireless telemetry system is being developed [ 40 ] to eliminate the need for transcranial wires. Arrays of activated iridium oxide film (AIROF)-coated electrodes will be implanted in the visual cortex, located on the occipital lobe of the brain. External hardware will capture images, process them, and generate instructions which will then be transmitted to implanted circuitry via a telemetry link. The circuitry will decode the instructions and stimulate the electrodes, in turn stimulating the visual cortex. The group is developing a wearable external image capture and processing system to accompany the implanted circuitry. Studies on animals and psychophysical studies on humans are being conducted [ 41 ] [ 42 ] to test the feasibility of a human volunteer implant. [ citation needed ]
Stephen Macknik and Susana Martinez-Conde at SUNY Downstate Medical Center are also developing an intracortical visual prosthetic, called OBServe. [ 43 ] [ 44 ] The planned system will use an LED array, a video camera, optogenetics, adeno-associated virus transfection, and eye tracking. [ 45 ] Components are currently being developed and tested in animals. [ 45 ]
|
https://en.wikipedia.org/wiki/Visual_prosthesis
|
Visual reasoning is the process of manipulating one's mental image of an object in order to reach a certain conclusion – for example, mentally constructing a piece of machinery to experiment with different mechanisms. In a frequently cited paper in the journal Science [ 1 ] and a later book, [ 2 ] Eugene S. Ferguson , a mechanical engineer and historian of technology, claims that visual reasoning is a widely used tool in creating technological artefacts. There is ample evidence that visual methods, particularly drawing, play a central role in creating artefacts. Ferguson's visual reasoning also has parallels in philosopher David Gooding 's [ 3 ] argument that experimental scientists work with a combination of action, instruments, objects and procedures, as well as words – that is, with a significant non-verbal component.
Ferguson argues that non-verbal reasoning does not get much attention in areas like history of technology and philosophy of science because the people involved are verbal rather than visual thinkers .
Those who use visual reasoning, notably architects, designers, engineers, and certain mathematicians conceive and manipulate objects in "the mind's eye" before putting them on paper. Having done this the paper or computer versions (in CAD ) can be manipulated by metaphorically "building" the object on paper (or computer) before building it physically.
Nikola Tesla claimed that the first alternating current motor he built ran perfectly because he had visualized and "run" models of it in his mind before building the prototype . [ 4 ]
|
https://en.wikipedia.org/wiki/Visual_reasoning
|
Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval , the sample image is very useful. In particular, reverse image search is characterized by a lack of search terms. This effectively removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search also allows users to discover content that is related to a specific sample image [ 1 ] or the popularity of an image, and to discover manipulated versions and derivative works. [ 2 ]
A visual search engine is a search engine designed to search for information on the World Wide Web through a reverse image search. Information may consist of web pages , locations, other images and other types of documents. This type of search engine is mostly used on the mobile Internet to search through an image of an unknown object (an unknown search query), for example a building in a foreign city. These search engines often use techniques for Content Based Image Retrieval .
A visual search engine matches images or patterns using recognition algorithms and returns related information based on pattern-matching techniques.
Reverse image search may be used to: [ 3 ]
Commonly used reverse image search algorithms include: [ 4 ]
An image search engine is a search engine that is designed to find an image. The search can be based on keywords, a picture, or a web link to a picture. The results depend on the search criterion, such as metadata , distribution of color, shape, etc., and the search technique which the browser uses.
Two techniques are currently used in image search:
Search by metadata: The search is based on comparing metadata associated with the image, such as keywords and text, and produces a set of images sorted by relevance. The metadata associated with each image can reference the title of the image, its format, color, etc., and can be generated manually or automatically. This metadata generation process is called audiovisual indexing.
Search by example: In this technique, also called reverse image search , the search results are obtained through comparison between images using content-based image retrieval computer vision techniques. During the search the content of the image is examined, such as color, shape, texture or any other visual information that can be extracted from the image. This approach carries a higher computational cost, but is more precise and reliable than search by metadata.
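To make search by example concrete, here is a minimal sketch of content-based matching using a perceptual "average hash". This is an illustrative toy rather than any particular engine's algorithm; it assumes the Pillow imaging library is available, and the file names are hypothetical.

```python
from PIL import Image

def average_hash(path, size=8):
    """Compute a 64-bit perceptual hash: shrink the image to an 8x8
    grayscale thumbnail and set a bit for each pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; small distances mean similar images."""
    return bin(h1 ^ h2).count("1")

# Hypothetical files: a query image compared against a tiny 'index'.
query = average_hash("query.jpg")
for candidate in ["a.jpg", "b.jpg", "c.jpg"]:
    d = hamming_distance(query, average_hash(candidate))
    print(candidate, d)  # rank candidates by ascending distance
```

Hashes of this kind survive resizing and mild recompression, which is why fingerprint-style matching can identify edited copies of an image.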
There are image searchers that combine both search techniques. For example, the first search is done by entering a text. The images obtained are then used to refine the search.
A video search engine is a search engine designed to search for video on the net. Some video search engines process the search directly on the Internet, while others host the videos over which the search is performed. Some also allow the format or the length of the video to be used as search parameters. Usually the results come with a miniature capture of the video.
Currently, almost all video searchers are based on keywords (search by metadata) to perform searches. These keywords can be found in the title of the video, text accompanying the video or can be defined by the author. An example of this type of search is YouTube .
A 3D model search engine aims to find the file of a 3D model in a database or network. At first glance the implementation of this type of search engine may seem unnecessary, but as the volume of content on the Internet keeps growing, indexing that information becomes ever more necessary.
These have been used alongside traditional text-based search (keywords / tags), where the authors of the indexed material, or Internet users, contribute the tags or keywords. Because this is not always effective, recent research has investigated search engines that combine text search with comparison against 2D drawings, 3D drawings and 3D models.
Princeton University has developed a search engine that combines all these parameters to perform the search, thus increasing the efficiency of search. [ 6 ]
A mobile image searcher is a type of search engine designed for mobile phones, through which information on the Internet can be found using an image taken with the phone itself or using certain words ( keywords ). Mobile visual search solutions enable image recognition capabilities to be integrated into branded mobile applications. Mobile Visual Search (MVS) bridges the gap between online and offline media, making it possible to link customers to digital content .
Mobile phones have evolved into powerful image and video processing devices equipped with high-resolution cameras, color displays, and hardware-accelerated graphics. They are also increasingly equipped with a global positioning system and connected to broadband wireless networks. All this enables a new class of applications that use the camera phone to initiate search queries about objects in visual proximity to the user. Such applications can be used, e.g., for identifying products, comparison shopping, finding information about movies, compact discs (CDs), real estate, print media, or artworks.
Typically, this type of search engine uses query-by-example techniques, which use the content, shape, texture and color of the image to compare it against a database and then return approximate results for the query.
The process used in these searches in the mobile phones is as follows:
First, the image is sent to the server application. On the server, the image is analyzed by several specialized modules, each dedicated to a different aspect of image content. Each module then decides whether the submitted image contains content within its specialty.
Once this whole procedure is done, a central server aggregates the modules' outputs and builds a results page ranked by each module's confidence, which is finally sent back to the mobile phone .
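A minimal client-side sketch of this round trip is shown below, assuming a hypothetical HTTP endpoint; the URL, field names and JSON response format are invented for illustration, since real services define their own APIs.

```python
import requests

# Hypothetical endpoint of a mobile visual search service.
SEARCH_URL = "https://example.com/api/visual-search"

def search_by_photo(photo_path):
    """Upload a camera photo and return the ranked results page
    produced by the server-side analysis modules."""
    with open(photo_path, "rb") as f:
        response = requests.post(
            SEARCH_URL,
            files={"image": f},        # the photo taken by the phone
            data={"max_results": 10},  # hypothetical parameter
            timeout=30,
        )
    response.raise_for_status()
    return response.json()             # assumed JSON list of results

if __name__ == "__main__":
    for hit in search_by_photo("storefront.jpg"):
        print(hit)
```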
Yandex Images offers a global reverse image and photo search. The site uses standard Content Based Image Retrieval (CBIR) technology, as do many other sites, but additionally uses artificial intelligence-based technology to locate further results based on the query. [ 7 ] Users can drag and drop images to the toolbar for the site to search the internet for similar-looking images. Yandex Images searches some obscure social media sites in addition to more common ones, offering content owners a means of tracking plagiarism of image or photo intellectual property.
Google's Search by Image is a feature that uses reverse image search and allows users to search for related images by uploading an image or copying the image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it. It is then compared with other images in Google's databases before returning matching and similar results. When available, Google also uses metadata about the image such as description. In 2022 the feature was replaced by Google Lens as the default visual search method on Google, and the old Search by Image function remains available within Google Lens. [ 8 ]
TinEye is a search engine specialized for reverse image search. Upon submitting an image, TinEye creates a "unique and compact digital signature or fingerprint" of said image and matches it with other indexed images. [ 9 ] This procedure is able to match even very edited versions of the submitted image, but will not usually return similar images in the results. [ 10 ]
Pixsy reverse image search technology detects image matches [ 11 ] on the public internet for images uploaded to the Pixsy platform. [ 12 ] New matches are automatically detected and alerts sent to the user. For unauthorized use, Pixsy offers a compensation recovery service [ 13 ] [ 14 ] for commercial use of the image owner's work. Pixsy partners with over 25 law firms and attorneys around the world to bring resolution for copyright infringement. Pixsy is the strategic image monitoring service for the Flickr platform and its users. [ 15 ]
eBay ShopBot uses reverse image search to find products from a user-uploaded photo. eBay uses a ResNet-50 network for category recognition; image hashes are stored in Google Bigtable ; Apache Spark jobs are operated by Google Cloud Dataproc for image hash extraction; and the image ranking service is deployed on Kubernetes . [ 16 ]
SK Planet uses reverse image search to find related fashion items on its e-commerce website. It developed a vision encoder network based on the TensorFlow inception-v3 model, chosen for its speed of convergence and generalization in production usage. A recurrent neural network is used for multi-class classification, and fashion-product region-of-interest detection is based on Faster R-CNN . SK Planet's reverse image search system was built in less than 100 man-months. [ 17 ]
Alibaba released the Pailitao application in 2014. Pailitao ( Chinese : 拍立淘 , literally means shopping through a camera) allows users to search for items on Alibaba's E-commercial platform by taking a photo of the query object. The Pailitao application uses a deep CNN model with branches for joint detection and feature learning to discover the detection mask and exact discriminative feature without background disturbance. GoogLeNet V1 is employed as the base model for category prediction and feature learning. [ 18 ] [ 19 ]
Pinterest acquired startup company VisualGraph in 2014 and introduced visual search on its platform. [ 20 ] In 2015, Pinterest published a paper at the ACM Conference on Knowledge Discovery and Data Mining conference and disclosed the architecture of the system. The pipeline uses Apache Hadoop , the open-source Caffe convolutional neural network framework, Cascading for batch processing, PinLater for messaging, and Apache HBase for storage. Image characteristics, including local features, deep features, salient color signatures and salient pixels are extracted from user uploads. The system is operated by Amazon EC2 , and only requires a cluster of 5 GPU instances to handle daily image uploads onto Pinterest. By using reverse image search, Pinterest is able to extract visual features from fashion objects (e.g. shoes, dress, glasses, bag, watch, pants, shorts, bikini, earrings) and offer product recommendations that look similar. [ 21 ] [ 22 ]
JD.com disclosed the design and implementation of its real-time visual search system at the Middleware '18 conference. The peer-reviewed paper focuses on the algorithms used by JD's distributed hierarchical image feature extraction, indexing and retrieval system, which serves 300 million daily active users. The system was able to sustain 80 million updates to its database per hour when it was deployed in production in 2018. [ 23 ]
Microsoft Bing published the architecture of its reverse image search system at the KDD'18 conference. The paper states that a variety of features from a query image submitted by a user are used to describe its content, including deep neural network encoders, category recognition features, face recognition features, color features and duplicate detection features. [ 24 ]
Amazon.com disclosed the architecture of a visual search engine for fashion and home products named Amazon Shop the Look in a paper published at the KDD'22 conference. The paper describes the lessons learned by Amazon when deployed in production environment, including image synthesis-based data augmentation for retrieval performance optimization and accuracy improvement. [ 25 ]
Microsoft Research Asia's Beijing Lab published a paper in the Proceedings of the IEEE on the Arista-SS (Similar Search) and the Arista-DS (Duplicate Search) systems. Arista-DS only performs duplicate search algorithms such as principal component analysis on global image features to lower computational and memory costs. Arista-DS is able to perform duplicate search on 2 billion images with 10 servers but with the trade-off of not detecting near duplicates. [ 26 ]
In 2007, the Puzzle library was released under the ISC license . Puzzle is designed to find visually similar images, even after the images have been resized, re-compressed, recolored and/or slightly modified. [ 27 ]
The image-match open-source project was released in 2016. The project, licensed under the Apache License , implements a reverse image search engine written in Python . [ 28 ]
Both the Puzzle library and the image-match projects use algorithms published at an IEEE ICIP conference. [ 29 ]
In 2019, a book published by O'Reilly documents how a simple reverse image search system can be built in a few hours. The book covers image feature extraction and similarity search, together with more advanced topics including scalability using GPUs and search accuracy improvement tuning. [ 30 ] The code for the system was made available freely on GitHub . [ 31 ]
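In the same spirit as that walkthrough (though not its exact code), the core of such a system can be sketched as CNN feature extraction followed by nearest-neighbor ranking. The sketch below assumes the torch and torchvision libraries are installed; the model choice and file names are illustrative.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet-50 with the classification head removed,
# so the network outputs a 2048-dimensional feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Map an image file to a unit-length feature vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        v = backbone(x).squeeze(0)
    return v / v.norm()

# Tiny in-memory index; hypothetical file names.
index = {name: embed(name) for name in ["a.jpg", "b.jpg", "c.jpg"]}
q = embed("query.jpg")
ranked = sorted(index, key=lambda n: -float(q @ index[n]))  # cosine similarity
print(ranked)
```

Production systems replace the in-memory dictionary with an approximate-nearest-neighbor index so that lookups stay fast across millions of images.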
The processing demands of reverse video search would be extremely high, and there is no simple tool for uploading a video and finding matching results. At present no technology can successfully perform a reverse video search. [ 32 ] [ 33 ]
|
https://en.wikipedia.org/wiki/Visual_search_engine
|
Visual technology is the engineering discipline dealing with visual representation.
Visual technology includes photography , printing , augmented reality , virtual reality and video . [ 1 ]
|
https://en.wikipedia.org/wiki/Visual_technology
|
Visual temporal integration is a perceptual process of integrating a continuous and rapid stream of information into discrete perceptual episodes or ‘events’. Arguably, integrating over small temporal windows, as opposed to sampling ‘snapshots’, allows the brain to evaluate visual information more reliably. [ 1 ] VTI by the brain reflects an important property of the world: the closer in time two pieces of information occur, the more likely it is that they will be part of the same ‘event’. Several other factors determine the brain’s integration window. [ 2 ]
One way in which scientists are studying visual temporal integration is by investigating the differences experienced by people with unusual ways of perceiving the world, for example in schizophrenia or autism. [ 3 ]
|
https://en.wikipedia.org/wiki/Visual_temporal_integration
|
Visual voicemail is direct-access voicemail with a visual interface. Such an interface presents a list of messages for playback, as opposed to the sequential listening required with traditional voicemail, and may include a transcript of each message. In 2007, Apple 's iPhone was the first cell phone to promote this feature.
In 2007, YouMail launched the first third-party, multi-platform visual voicemail service for mobile phones, storing voicemail in the cloud rather than on the mobile carrier's network and providing access to it through any web browser or by e-mail. In 2009, YouMail was the first to extend this functionality with an app for the BlackBerry, iPhone, and Android platforms, and an API that allowed others to build clients for Windows Phone 7 and WebOS.
Other phone system vendors are now also offering these features for internal voicemail users. This complements basic voicemail-to-e-mail or voicemail-via-SMS delivery to mobile devices, which is becoming ubiquitous, in that it allows better management of voicemail messages without clogging up the user's inbox, and saves time filtering spam.
One way to use visual voicemail is via mobile client applications. T-Mobile International launched the service as Mobilbox Pro in August 2009 for a range of Symbian S60 devices, with announced plans to support further phones, including Windows Mobile and Android devices.
In April 2009, OMTP created a Technical Recommendation [ 1 ] for an open and standardized visual voicemail (VVM) interface protocol that VVM clients may use to interact with a voicemail server. The key functions of this interface are the support of message retrieval, message upload, VVM management, greeting management and provisioning. The document is intended to ensure that standard functionality of voicemail servers may be accessed through a range of VVM clients via the defined interface. This approach leaves scope for operators/carriers and vendors to differentiate their products.
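The OMTP recommendation builds this client-server interface on the IMAP4 protocol, so in principle a voicemail list can be fetched much like e-mail. The sketch below is a rough illustration under that assumption; the host, credentials and folder name are hypothetical and would normally be provisioned by the carrier.

```python
import imaplib

# Hypothetical, carrier-provisioned connection details.
HOST = "vvm.carrier.example"
USER = "15551234567"
TOKEN = "provisioned-password"

conn = imaplib.IMAP4_SSL(HOST)   # OMTP VVM exposes an IMAP4 mailbox
conn.login(USER, TOKEN)
conn.select("INBOX")             # voicemails live in a mail folder

# Fetch headers only, enough to build the visual list of messages.
typ, data = conn.search(None, "ALL")
for num in data[0].split():
    typ, msg = conn.fetch(num, "(BODY.PEEK[HEADER])")
    print(msg[0][1].decode(errors="replace"))

conn.logout()
```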
In 2010, Google Voice became available without invitation. As a voicemail application on Google's Android platform, it can assume control of the visual voicemail functionality in place of a carrier's own application. In 2015, Google brought a native implementation of visual voicemail into Android via the Marshmallow update, by integrating it into the dialer user interface, thereby allowing compatible carrier VVM services to hook into it with minimal configuration.
Klausner Technologies Inc is a corporation based in Sagaponack, New York , [ 2 ] [ 3 ] that invented visual voicemail technology in 1994. The company filed US patent #5,572,576 in March 1994, titled "Telephone answering device linking displayed data with recorded audio message". [ 4 ] The United States Patent and Trademark Office awarded the patent on November 5, 1996, to inventors Judah Klausner and Robert Hotto. [ 5 ]
From 2005 to 2012, Klausner Technologies Inc, whose CEO is Judah Klausner , one of the inventors listed on the '576 patent, brought dozens of lawsuits against Apple , Callware , Google , Microsoft , Oracle , MetroPCS , Digium , Schmooze Com Inc and others for patent infringement. [ 6 ] In 2008, Apple purchased a license to Klausner's visual voicemail technology. [ 7 ]
In August 2011, a patent was granted to Apple for "Voicemail manager for portable multifunction device". [ 8 ] In Apple's granted patent, Apple cited two Klausner Technology patents (US Pat #5,283,818 and US Pat #5,333,266) as being prior art . [ 8 ]
|
https://en.wikipedia.org/wiki/Visual_voicemail
|
The visual workplace is a continuous improvement paradigm that is closely related to lean manufacturing , the Toyota Production System (TPS), and operational excellence yet offers its own comprehensive methodology that aims for significant financial and cultural improvement gains. Introduced by Gwendolyn Galsworth in her 1997 book Visual Systems , [ 1 ] this system integrates and codifies the many iterations of visuality in the world of continuous improvement.
Visual communication rests on the natural inclination of humans to use pictures, graphics, and other images to quickly and simply convey meaning and understand information. Consider, for instance, the practices and applications that civil engineers have developed to handle complex human interaction on roads and highways, [ 2 ] as well as the entire field of wayfinding in public spaces. [ 3 ]
The same logic eventually migrated into the workplace, notably in post-war Japan, and most saliently at Toyota Motors, where visual applications (visual devices) became a commonplace element in the Toyota Production System (TPS). [ 4 ] Other leading companies in Japan, such as Canon and Okidata, adopted many of the same practices. However, while visibility was clearly a part of Japan's success solution, it was only noticed—or cited in the literature—as a generalized principle and not a codified system or a framework of thinking. For example, Robert W. Hall, in his 1983 book, Zero Inventories, states: "Establishing visibility of all forms of production problems is very important. ... The entire idea is instant communication." [ 5 ]
Specifically, Japan's JIT (just-in-time) manufacturing approach had an easy-to-understand visual interface: andon (stacked lights), kanban (pick-up tickets for controlling material quantity), color-coding (to match items), scheduling boards for daily production, easy-to-read labels on shelving, and lines on the floor to trace out locations. [ 6 ]
Japanese master practitioners also noted that visual devices made it easy to see the difference between normal and abnormal: "... abnormal conditions and problems need to be obvious enough to catch people's attention. Because of the emphasis on visual methods for quick information transfer, the practice is called 'management by sight' or 'visual control'." [ 7 ] Suzaki also compared the responsiveness of a well-tuned production system with the way the human body responds to stimuli and problems: "... Corrective action is taken right away, just as our muscles pull our hand away when we touch a hot plate." [ 7 ]
Michel Greif's book, The Visual Factory [ 8 ] conferred the name for the first time, though Greif's theme focused primarily on the ability of visual applications to increase the interest of hourly workers in their own performance and their participation in company improvement activities.
Throughout this period (1983–1991), Gwendolyn Galsworth was head of training and development at Productivity Inc. in Cambridge, Massachusetts, a publishing, training, and consulting firm known for bringing the work of Japan's manufacturing leaders to the United States. [ citation needed ] Galsworth headed study missions to Japan and observed visuality in Japan first-hand. She also had the opportunity of working one-on-one with many of Japan's seminal thinkers, including Taiichi Ohno , Ryuji Fukuda, and Shigeo Shingo . Shingo personally tasked Galsworth with developing his mistake proofing/poka-yoke methodology for western companies. [ 9 ] It was the synthesis of all these factors and influences that led Galsworth to develop and codify the many threads of visuality into a coherent methodology of the visual workplace, an overall operational strategy and philosophy geared to help organizations continually achieve their goals through visual devices and systems. [ citation needed ]
Galsworth continues to be the main driving force behind the practice and articulation of workplace visuality, along with a network of individuals and companies loosely coupled as visual workplace practitioners around the world. [ citation needed ] The visual workplace is a large body of knowledge and know-how, with a strong guiding philosophy of continuous improvement with an emphasis on the centrality of the individual in the prosperity of the enterprise. [ citation needed ]
While virtually all major improvement paradigms in use in the West incorporate some element of visuality, the entire codified set of visual principles and practices, from the foundation of 5S through to visual guarantees (poka-yoke), rests on this definition: "The visual workplace is a self-ordering, self-explaining, self-regulating, and self-improving work environment—where what is supposed to happen does happen, on time, every time, day or night—because of visual devices." [ 10 ]
A visual workplace is defined by devices designed to visually share information about organizational operations in order to make human and machine performance safer, more exact, more repeatable, and more reliable. The more the process becomes visual, the more production velocity increases. [ 11 ]
This is accomplished in parallel with generating new levels of employee engagement and contribution, which in turn lead to improved alignment within the enterprise and significant bottom line benefits. In an effective visual workplace, this level of information can be seen and understood without coaching, supervision or the need for an explanation—at best, without speaking a word. [ 12 ]
The key principle is to install vital information visually as close to the point of use as possible. When a step-by-step methodology is applied, the visual workplace targets the elimination of the seventh waste, motion , defined as moving without working. [ 1 ]
Originally implemented and refined in manufacturing settings, the concept of the visual workplace is now taking hold in such wide-ranging venues as libraries [ 13 ] and hospitals. [ 14 ]
|
https://en.wikipedia.org/wiki/Visual_workplace
|
Vital Materials is a company established in 1995 that primarily refines minor metals . [ 1 ] [ 2 ] [ 3 ]
Zhu Shihui, formerly an employee of Sumitomo , founded the firm in 1995. [ 1 ] In 2020, Vital Materials purchased at auction a large amount of indium formerly held by the Fanya Metal Exchange. [ 3 ]
|
https://en.wikipedia.org/wiki/Vital_Materials
|
Vital effects are biological impacts on geochemical records. Many marine organisms, ranging from zooplankton (e.g. foraminifera ) to phytoplankton (e.g. diatoms ) to reef builders (e.g. coral ), create shells or skeletons from chemical compounds dissolved in seawater. This process, which is also called biomineralization , therefore records the chemical signature of seawater during the time of shell formation. However, different species have different metabolism and physiology, causing them to create their shells in different ways. These biological distinctions cause species to record slightly different chemical signatures in their shells; these differences are known as vital effects.
Vital effects are relevant to study because of their influence on paleoclimatic interpretations. Scientists study the isotopic composition of marine organisms’ shells that have been preserved in marine sediment in order to reconstruct past environmental conditions. The earliest example of this work is using oxygen isotopes from the calcium carbonate (CaCO 3 ) in belemnites to reconstruct paleotemperatures. [ 1 ] It is important to understand vital effects because they affect how paleoclimatic data are interpreted, which influences how scientists predict future impacts of climate change.
Foraminifera are widely used for paleoclimatic and paleoceanographic research because of the oxygen isotopes in their calcium carbonate shells. Oxygen isotopes, or δ 18 O , are used to interpret past temperature and ice volume. [ 2 ] Foraminifera also incorporate trace amounts of boron in their shells, which is used to reconstruct past pH . [ 3 ] Foraminiferal isotopic composition is affected by factors such as algal symbionts or species-specific physiology. [ 4 ] [ 5 ] The influence of such vital effects can be determined via culture experiments.
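For reference, the δ 18 O value used in these reconstructions is conventionally defined relative to a standard (VSMOW for water; VPDB is common for carbonates) and reported in per mil (‰):

```latex
\delta^{18}\mathrm{O} =
\left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
            {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right)
\times 1000
```

A vital effect then appears as a species-specific offset from the value this equation would predict for seawater at the ambient temperature.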
Similar to foraminifera shells, the isotopic composition of coral skeletons is used to reconstruct past temperature, CO 2 concentrations, and pH. [ 6 ] [ 7 ] Vital effects arise from algal symbionts and biological responses to changes in conditions such as pH. Again, culture experiments are used to quantify vital effects and calibrate the use of coral isotopic composition as a proxy. [ 8 ]
Diatoms can also be used to study oxygen isotopes and are especially useful in regions of the ocean where foraminifera do not preserve in marine sediments. One example of vital effects in diatoms is a difference in δ 18 O between two different species, Coscinodiscus marginatus and Coscinodiscus radiatus , which is attributed to their difference in size. [ 9 ]
|
https://en.wikipedia.org/wiki/Vital_effects
|
Vital rates refer to how fast vital statistics change in a population (usually measured per 1000 individuals). There are two categories of vital rates: crude rates and refined rates .
Crude rates measure vital statistics in a general population (overall change in births and deaths per 1000).
Refined rates measure the change in vital statistics in a specific demographic (such as age, sex, race, etc.).
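As a worked illustration with hypothetical numbers, a crude birth rate is simply births per 1000 of the total population:

```latex
\text{crude birth rate} = \frac{\text{births in a year}}{\text{mid-year population}} \times 1000,
\qquad \text{e.g. } \frac{14\,000}{1\,000\,000} \times 1000 = 14 \text{ per } 1000
```

A refined rate restricts both numerator and denominator to a specific demographic, such as births per 1000 women aged 15–44.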
In the US, the national marriage rate has fallen by almost 50% since 1972, to six people per 1000. According to the Iran Index and the National Organization for Civil Registration of Iran, the Iranian divorce rate is at its highest recorded level since 1979, and divorce quotas were introduced to curb the rise. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
|
https://en.wikipedia.org/wiki/Vital_rates
|
According to the vital force theory, the conduction of water up the xylem vessel is a result of vital action of the living cells in the xylem tissue. These living cells are involved in ascent of sap. Relay pump theory and Pulsation theory support the active theory of ascent of sap.
Emil Godlewski (senior) (1884) proposed the relay pump or clambering force theory (through xylem parenchyma), and Jagadish Chandra Bose (1923) proposed the pulsation theory (due to pulsatory activities of the innermost cortical cells just outside the endodermis).
Jagadish Chandra Bose suggested a mechanism for the ascent of sap in 1927. His theory can be demonstrated with the help of a galvanometer and electric probes. He found electrical 'pulsations', or oscillations in electric potentials, and came to believe these were coupled with rhythmic movements in the telegraph plant Codariocalyx motorius (then Desmodium ). On this basis Bose theorized that regular wave-like 'pulsations' in cell electric potential and turgor pressure were an endogenous form of cell signaling. According to him, the living cells in the inner lining of the xylem tissue pump water by contractive and expulsive movements, similar to the animal heart circulating blood.
This mechanism has not been well supported, and in spite of some ongoing debate, the evidence overwhelmingly supports the cohesion-tension theory for the ascent of sap.
|
https://en.wikipedia.org/wiki/Vital_theory
|
In mathematics , a Vitali set is an elementary example of a set of real numbers that is not Lebesgue measurable , found by Giuseppe Vitali in 1905. [ 1 ] The Vitali theorem is the existence theorem asserting that such sets exist. Each Vitali set is uncountable , and there are uncountably many Vitali sets. The proof of their existence depends on the axiom of choice .
Certain sets have a definite 'length' or 'mass'. For instance, the interval [0, 1] is deemed to have length 1; more generally, an interval [ a , b ], a ≤ b , is deemed to have length b − a . If we think of such intervals as metal rods with uniform density, they likewise have well-defined masses. The set [0, 1] ∪ [2, 3] is composed of two intervals of length one, so we take its total length to be 2. In terms of mass, we have two rods of mass 1, so the total mass is 2.
There is a natural question here: if E is an arbitrary subset of the real line, does it have a 'mass' or 'total length'? As an example, we might ask what is the mass of the set of rational numbers between 0 and 1, given that the mass of the interval [0, 1] is 1. The rationals are dense in the reals, so any value between and including 0 and 1 may appear reasonable.
However the closest generalization to mass is sigma additivity , which gives rise to the Lebesgue measure . It assigns a measure of b − a to the interval [ a , b ], but will assign a measure of 0 to the set of rational numbers because it is countable . Any set which has a well-defined Lebesgue measure is said to be "measurable", but the construction of the Lebesgue measure (for instance using Carathéodory's extension theorem ) does not make it obvious whether non-measurable sets exist. The answer to that question involves the axiom of choice .
A Vitali set is a subset V of the interval [0, 1] of real numbers such that, for each real number r, there is exactly one number v ∈ V such that v − r is a rational number . Vitali sets exist because the rational numbers Q form a normal subgroup of the real numbers R under addition , and this allows the construction of the additive quotient group R/Q of these two groups, which is the group formed by the cosets r + Q of the rational numbers as a subgroup of the real numbers under addition. This group R/Q consists of disjoint "shifted copies" of Q in the sense that each element of this quotient group is a set of the form r + Q for some r in R. The uncountably many elements of R/Q partition R into disjoint sets, and each element is dense in R. Each element of R/Q intersects [0, 1], and the axiom of choice guarantees the existence of a subset of [0, 1] containing exactly one representative out of each element of R/Q. A set formed this way is called a Vitali set.
Every Vitali set V is uncountable, and v − u is irrational for any u, v ∈ V with u ≠ v.
A Vitali set is non-measurable. To show this, we assume that V is measurable and derive a contradiction. Let q_1, q_2, … be an enumeration of the rational numbers in [−1, 1] (recall that the rational numbers are countable ). From the construction of V, the translated sets V_k = V + q_k = { v + q_k : v ∈ V }, k = 1, 2, …, are pairwise disjoint. (If not, then there exist distinct v, u ∈ V and k, ℓ such that v + q_k = u + q_ℓ, hence v − u = q_ℓ − q_k ∈ Q, a contradiction.)
Next, note that

[0, 1] ⊆ ⋃_{k=1}^∞ V_k ⊆ [−1, 2].

To see the first inclusion, consider any real number r in [0, 1] and let v be the representative in V for the equivalence class [r]; then r − v = q_i for some rational number q_i in [−1, 1], which implies that r is in V_i. The second inclusion holds because each V_k is a subset of [0, 1] shifted by a rational number in [−1, 1].

Applying the Lebesgue measure to these inclusions and using sigma additivity (the V_k are pairwise disjoint) gives

1 ≤ ∑_{k=1}^∞ λ(V_k) ≤ 3.

Because the Lebesgue measure is translation invariant, λ(V_k) = λ(V), and therefore

1 ≤ ∑_{k=1}^∞ λ(V) ≤ 3.

But this is impossible. Summing infinitely many copies of the constant λ(V) yields either zero or infinity, according to whether the constant is zero or positive. In neither case is the sum in [1, 3]. So V cannot have been measurable after all, i.e., the Lebesgue measure λ must not define any value for λ(V).
No Vitali set has the property of Baire . [ 2 ]
By modifying the above proof, one shows that each Vitali set has Banach measure 0. This does not create any contradictions since Banach measures are not countably additive, but only finitely additive.
The construction of Vitali sets given above uses the axiom of choice . The question arises: is the axiom of choice needed to prove the existence of sets that are not Lebesgue measurable? The answer is yes, provided that the existence of inaccessible cardinals is consistent with the most common axiomatization of set theory, so-called ZFC .
In 1964, Robert Solovay constructed a model of Zermelo–Fraenkel set theory without the axiom of choice where all sets of real numbers are Lebesgue measurable. This is known as the Solovay model . [ 3 ] In his proof, Solovay assumed that the existence of inaccessible cardinals is consistent with the other axioms of Zermelo-Fraenkel set theory, i.e. that it creates no contradictions. This assumption is widely believed to be true by set theorists, but it cannot be proven in ZFC alone. [ 4 ]
In 1980, Saharon Shelah proved that it is not possible to establish Solovay's result without his assumption on inaccessible cardinals. [ 4 ]
|
https://en.wikipedia.org/wiki/Vitali_set
|
Vitaly Lazarevich Ginzburg ForMemRS [ 1 ] (Russian: Вита́лий Ла́заревич Ги́нзбург ; 4 October [ O.S. 21 September] 1916 – 8 November 2009) was a Russian physicist who was honored with the Nobel Prize in Physics in 2003, together with Alexei Abrikosov and Anthony Leggett for their "pioneering contributions to the theory of superconductors and superfluids." [ 2 ]
He spent his career in the former Soviet Union and was one of the leading figures in the Soviet program of nuclear weapons , working on the design of thermonuclear devices . [ 3 ] [ 4 ] He became a member of the Russian Academy of Sciences and succeeded Igor Tamm as head of the Department of Theoretical Physics of the Lebedev Physical Institute of the Russian Academy of Sciences ( FIAN ). In his later life, Ginzburg became an outspoken atheist and was critical of the clergy 's influence in Russian society. [ 5 ]
Vitaly Ginzburg was born to a Jewish family in Moscow on 4 October 1916, the son of an engineer, Lazar Yefimovich Ginzburg, and a doctor, Augusta Wildauer. He graduated from the Physics Faculty of Moscow State University in 1938, defended his candidate's ( Kandidat Nauk ) dissertation in 1940, and defended his doctoral ( Doktor Nauk ) thesis in 1942. In 1944, he became a member of the Communist Party of the Soviet Union. Among his achievements are a partially phenomenological theory of superconductivity , the Ginzburg–Landau theory , developed with Lev Landau in 1950; [ 6 ] the theory of electromagnetic wave propagation in plasmas (for example, in the ionosphere ); and a theory of the origin of cosmic radiation . He is also known to biologists as being part of the group of scientists that helped bring down the reign of the politically connected anti- Mendelian agronomist Trofim Lysenko , thus allowing modern genetic science to return to the USSR . [ 7 ]
In 1937, Ginzburg married Olga Zamsha. In 1946, he married his second wife, Nina Ginzburg ( née Yermakova), who had spent more than a year in custody on fabricated charges of plotting to assassinate the Soviet leader Joseph Stalin . [ 8 ]
As a renowned professor and researcher, Ginzburg was an obvious candidate for the Soviet bomb project . From 1948 through 1952 Ginzburg worked under Igor Kurchatov to help develop the hydrogen bomb . [ 9 ] Ginzburg and Igor Tamm both proposed ideas that would make it possible to build a hydrogen bomb. When the bomb project moved to Arzamas-16 to continue in even greater secrecy, Ginzburg was not allowed to follow. Instead he stayed in Moscow and supported the project from afar, remaining under watch because of his background and past. [ 2 ] As the work became ever more classified, Ginzburg was phased out of the project and allowed to pursue his true passion, superconductors. During the Cold War , both sides sought to leverage the potential military applications of superconductors, and the Soviet Union believed that its research on superconductors would place it ahead of its American counterparts.
Ginzburg was the editor-in-chief of the scientific journal Uspekhi Fizicheskikh Nauk . [ 4 ] He also headed the Academic Department of Physics and Astrophysics Problems, which Ginzburg founded at the Moscow Institute of Physics and Technology in 1968. [ 10 ]
Ginzburg identified as a secular Jew, and following the collapse of communism in the former Soviet Union, he was very active in Jewish life, especially in Russia, where he served on the board of directors of the Russian Jewish Congress . He is also well known for fighting anti-Semitism and supporting the state of Israel . [ 11 ]
In the 2000s (decade), Ginzburg was politically active, supporting the Russian liberal opposition and the human rights movement. [ 12 ] He defended Igor Sutyagin and Valentin Danilov against charges of espionage put forth by the authorities. On 2 April 2009, in an interview with Radio Liberty , Ginzburg denounced the FSB as an institution harmful to Russia and described the ongoing expansion of its authority as a return to Stalinism . [ 13 ]
Ginzburg worked at the P. N. Lebedev Physical Institute of the Soviet and Russian Academy of Sciences in Moscow from 1940. The Russian Academy of Sciences is a major institution where almost all of Russia's Nobel laureates in physics have done their studies and/or research work. [ 14 ]
Ginzburg was an avowed atheist, both under the militantly atheist Soviet government and in post-Communist Russia when religion made a strong revival. [ 15 ] He criticized clericalism in the press and wrote several books devoted to the questions of religion and atheism. [ 16 ] [ 17 ] Because of this, some Orthodox Christian groups denounced him and said no science award could excuse his verbal attacks on the Russian Orthodox Church . [ 18 ] He was one of the signers of the Open letter to the President Vladimir V. Putin from the Members of the Russian Academy of Sciences against clericalisation of Russia.
Vitaly Ginzburg, along with Anthony Leggett and Alexei Abrikosov, was awarded the Nobel Prize in Physics in 2003 for groundbreaking work on the theory of superconductors . [ 2 ] The Nobel Prize recognized Ginzburg's work in theoretical physics , specifically his contributions to understanding the behavior of matter at extremely low temperatures.
His collaboration with Lev Landau in 1950 led to the development of the Ginzburg-Landau theory, which became paramount to later work on superconductors. Landau had been working on superconductivity for years before their partnership, publishing many papers between 1941 and 1947 on the properties of quantum fluids at extremely low temperatures. Landau would later receive a Nobel Prize in 1962 for his 1941 research on the properties of superfluid liquid helium. [ 19 ] Before their collaboration, Landau had worked mainly on liquid helium and other quantum fluids; Ginzburg allowed them to go a step further.
Ginzburg introduced the concept of an order parameter, which allowed them to characterize the state of the superconductor. To do this, they derived a set of equations describing the behavior of the superconductor. [ 20 ] These equations provided a model from which researchers can understand the transition between a normal and a superconducting state, as well as predict various properties of other superconductors. Using these equations, they were also able to introduce the Ginzburg-Landau parameter, which classifies a superconductor as Type-I or Type-II. This advancement allowed Anthony Leggett to build upon it and complete his own research on superconductors.
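For context, the Ginzburg-Landau free energy referred to above is well documented in the literature; a standard form (in Gaussian units), written in terms of the complex order parameter ψ, is:

```latex
F = F_n + \int \left[ \alpha |\psi|^2 + \frac{\beta}{2} |\psi|^4
  + \frac{1}{2m^*} \left| \left( -i\hbar \nabla - \frac{e^*}{c} \mathbf{A} \right) \psi \right|^2
  + \frac{|\mathbf{B}|^2}{8\pi} \right] dV
```

Minimizing F with respect to ψ and the vector potential A yields the two Ginzburg-Landau equations. The Ginzburg-Landau parameter is κ = λ/ξ, the ratio of the magnetic penetration depth to the coherence length; κ < 1/√2 corresponds to Type-I and κ > 1/√2 to Type-II superconductors.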
This research on superconductors enabled many new technological advancements, including some visible in everyday life: superconductors are used in MRI machines, [ 21 ] engines , and maglev trains .
A spokeswoman for the Russian Academy of Sciences announced that Ginzburg died in Moscow on 8 November 2009 from cardiac arrest . [ 3 ] [ 22 ] He had been suffering from ill health for several years, [ 22 ] and three years before his death said "In general, I envy believers. I am 90, and [am] being overcome by illnesses. For believers, it is easier to deal with them and with life's other hardships. But what can be done? I cannot believe in resurrection after death." [ 22 ]
Prime Minister of Russia Vladimir Putin sent his condolences to Ginzburg's family, saying "We bid farewell to an extraordinary personality whose outstanding talent, exceptional strength of character and firmness of convictions evoked true respect from his colleagues". [ 22 ] President of Russia Dmitry Medvedev , in his letter of condolences, described Ginzburg as a "top physicist of our time whose discoveries had a huge impact on the development of national and world science." [ 23 ]
Ginzburg was buried on 11 November in the Novodevichy Cemetery in Moscow, the resting place of many famous politicians, writers and scientists of Russia. [ 3 ]
His first wife (1937–1946), Olga Ivanovna Zamsha (born 1915, Yeysk ), was a graduate of the Faculty of Physics of Moscow State University (1938), a candidate of physical and mathematical sciences (1945), an associate professor at MEPhI (1949–1985), and an author of the "Collection of Problems on General Physics" (with co-authors, 1968, 1972, 1975).
His second wife (from 1946), Nina Ivanovna Ginzburg (née Ermakova; October 2, 1922 – May 19, 2019), was a graduate of the Faculty of Mechanics and Mathematics of Moscow State University and an experimental physicist.
His daughter, Irina Vitalievna Dorman (born 1939), is a graduate of the Faculty of Physics of Moscow State University (1961), a candidate of physical and mathematical sciences, and a historian of science; her husband is the cosmophysicist Leib (Lev) Isaakovich Dorman, doctor of physical and mathematical sciences.
His granddaughter, Victoria Lvovna Dorman, is an American physicist, a graduate of the physics department of Moscow State University and of Princeton University , and deputy dean for academic affairs at the Princeton School of Engineering and Applied Science; her husband is the physicist and writer Mikhail Petrov.
His great cousin is Mark Ginzburg .
|
https://en.wikipedia.org/wiki/Vitaly_Ginzburg
|
Vitaly Grigorievich Khlopin (Russian: Вита́лий Григо́рьевич Хло́пин) (January 1890 – 10 July 1950) was a Russian and Soviet radiochemist, professor, academician of the USSR Academy of Sciences (1939), Hero of Socialist Labour (1949), and director of the Radium Institute of the USSR Academy of Sciences (1939–1950). [ 1 ] [ 2 ] [ 3 ] He was one of the founders of Soviet radiochemistry and of the Soviet radium industry: he obtained the first domestic radium preparations (1921), was one of the founders of the Radium Institute and a leading participant in the atomic project, and founded the school of Soviet radiochemists.
He was born on January 14 (26), 1890 in Perm , in the family of the physician Grigory Vitalievich Khlopin (1863–1929). From 1905 the Khlopins lived in St. Petersburg .
Brief chronology of his life path: [ 4 ] [ 5 ] [ 6 ]
1922–1934: Head of the gas department of the NPFR (Commission for the Study of the Natural Productive Forces of Russia) and of the Geochemical Institute of the USSR Academy of Sciences (Leningrad);
He died on July 10, 1950, and was buried in Leningrad, at the Tikhvin Cemetery in the Alexander Nevsky Lavra .
Khlopin was first married to Nadezhda Pavlovna Annenkova (daughter of the Narodovtsy P. S. Annenkov[clarification]).
V. G. Khlopin began his independent scientific activity as a student in 1911 - in his father's laboratory at the Clinical Institute he carried out work, the results of which were published in the article "On the formation of oxidants in the air under the action of ultraviolet rays". [ 4 ] [ 16 ]
In these studies V. G. Khlopin first proved that the action of ultraviolet rays on atmospheric air forms not only hydrogen peroxide and ozone but also nitrogen oxides; the latter claim started a long discussion that lasted until 1931, when D. Vorländer ( German : D. Vorländer ) proved the correctness of V. G. Khlopin's observations. [ 4 ]
V. G. Khlopin's interests were not strictly confined to any one area. They were shaped by the schooling he received under the guidance of L. A. Chugaev and V. I. Vernadsky, in general chemistry and geochemistry respectively, which in turn allowed V. G. Khlopin to develop his own scientific direction: the creation of the first domestic school of radiochemists.
At the initial stage of his research activity (1911-1917), V. G. Khlopin was mainly concerned with problems related to inorganic and analytical chemistry. In 1913, together with L. A. Chugaev, he worked on the synthesis of complex compounds of platonitrite with dithioethers. Of his further works, especially important are those aimed at the development of a new method for the preparation of various derivatives of univalent nickel, and the creation of a device for determining the solubility of compounds at different temperatures. [ 4 ] [ 6 ]
Among the most interesting works of this period is the discovery of the hydroxopentamine series of complex compounds of platinum, made in 1915 by L. A. Chugaev and V. G. Khlopin; curiously, yet quite naturally from a methodological point of view, it was made somewhat earlier than the discovery by L. A. Chugaev and N. A. Vladimirov of the pentamine series, later called Chugaev's salts. [ 4 ]
Two works hold a special place in this period of V. G. Khlopin's scientific work:
1. The action of sodium hydrosulfite on metallic selenium and tellurium, which led to a convenient method of obtaining sodium telluride and selenide and a convenient synthesis of organic compounds of tellurium and selenium (1914);
2. The action of sodium hydrosulfite on nickel salts in the presence of sodium nitrite, which led to the synthesis of univalent nickel derivatives (1915); these were obtained much later (in 1925) in Germany by Manchot and co-workers by the action of carbon monoxide and nitric oxide on nickel salts. [ 4 ]
Here, at the same department, already during the First World War, V. G. Khlopin performed his first technological work on assignment from the Chemical Committee of the Main Artillery Department: he developed a method of obtaining pure platinum from Russian raw materials. The importance of this work was due to the sharp reduction of imports. His participation in several expeditions aimed at identifying Russia's natural resources served the same purpose. He also wrote reviews on the rare elements boron, lithium, rubidium, cesium and zirconium. [ 4 ]
All of V. G. Khlopin's further scientific activity was predetermined by his meeting with V. I. Vernadsky. In the laboratory founded by Vladimir Ivanovich Vernadsky, a systematic study of radioactive minerals and rocks was carried out, the search for which in Russia was conducted by expeditions, also organized on his initiative. V. I. Vernadsky was the first Russian scientist who realized the importance of the discovery of radioactivity: "...For us it is not completely indifferent at all how radioactive minerals of Russia will be studied... Now, when mankind is entering a new age of radiant - atomic energy, we, and not others, should know, should find out what the soil of our native country holds in this respect". [ 5 ] [ 17 ]
In 1909 V. I. Vernadsky headed the research of radioactivity phenomena in Russia, under his chairmanship the Radium Commission was organized - all the works were united under the auspices of the Academy of Sciences, the Radiological Laboratory was founded, since 1914 the publication of the "Proceedings of the Radium Expedition of the Academy of Sciences" was started. In the mentioned speech V. I. Vernadsky notes the specific features of the new direction of scientific research: "This discovery has produced a huge revolution in the scientific outlook, caused the creation of a new science, different from physics and chemistry - the doctrine of radioactivity, put before life and technology practical tasks of a completely new kind...". [ 18 ]
In 1915, V. I. Vernadsky brought V. G. Khlopin to work in the Radiological Laboratory. V. G. Khlopin was destined to become the first, and for many years the leading, specialist in the new discipline. But research in the field of radioactivity and the study of the new radioactive elements already discovered in Russia at that time was still in its initial organizational period: there were no domestic radium preparations for laboratory experiments. However, deposits of minerals and ores, the raw materials for the consistent development of scientific work in this direction and for the systematic study of radioactive minerals, were already known. The leading experts in the field, Professors K. A. Nenadkevich and A. E. Fersman, [ 5 ] [ 6 ] were invited to participate in this work.
While mastering the fundamental areas of activity that became his life's work, V. G. Khlopin developed research on both scientific and applied problems, including methods of the geochemistry of radioactive elements and noble gases, analytical chemistry and thermodynamics; at the same time, he developed an independent direction that laid the groundwork for the formation of a scientific school. By the early 1920s, four main lines had emerged, which in turn led to the establishment of an independent school:
1. radium technology ;
2. chemistry of radioelements and applied radiochemistry ;
3. geochemistry of radioelements and noble gases ;
4. analytical chemistry . [ 4 ]
In 1917, the purely scientific interest in the study of radium was replaced by the practical need to use it for military purposes: the military department and defense organizations had received information that radium was used for the production of luminous compounds. The extraction of radium from domestic raw materials became urgent. A large batch of radium-containing ore from the Tyuya-Muyun deposit was stored in the warehouse of a private commercial firm, the "Fergana Society for Rare Metals Mining". Due to the lack of specialist radiochemists in Russia, this organization was preparing the raw material for shipment to Germany for technological extraction of the final product, but the war and then the February Revolution of 1917 prevented this. [ 6 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
The Congress for the Technical Defense of the State in October 1917 decided to organize a special radium plant under the direct control of the Academy of Sciences, but the October Socialist Revolution again removed this issue from the agenda. In January 1918 V. G. Khlopin published an article, "A Few Words on the Application of Radioactive Elements in Military Technology and on the Possible Future of the Radium Industry in Russia", [ 23 ] in which he characterized the importance and prospective use of radium for military-strategic purposes. In the spring of the same year, the Presidium of the All-Russian Council of National Economy (RCNE) decided to sequester the radioactive raw materials belonging to the "Fergana Society"; in April, the Chemical Department of the RCNE, headed by Prof. L. Ya. Karpov, entrusted the Academy of Sciences with the mission of organizing a plant for radium extraction from domestic uranium-vanadium ores and ensuring scientific control over production. At a meeting of specialists convened on 12 April by the Commission for the Study of the Natural Productive Forces of Russia (NPFR) and chaired by N. S. Kurnakov, V. G. Khlopin and L. I. Bogoyavlensky reported on the results of the work undertaken to obtain radium from the available raw materials. In July 1918 a special commission, the Technical Council (later the Board) for the organization of a radium plant at the Academy of Sciences, was elected, which decided to organize a research laboratory; a special Radium Department under the Commission, headed by V. I. Vernadsky, was established under the chairmanship of A. E. Fersman, senior mineralogist of the Academy of Sciences and professor of the Higher Women's Courses. The secretary of the department, a specialist of the Radium Laboratory of the Academy and an assistant of the Department of General Chemistry of Petrograd University, the 28-year-old V. G. Khlopin, was appointed its commissioner for the organization of the radium plant. His thorough theoretical training, his mastery of the methods of fine chemical analysis, his ability to solve practical problems effectively, and his experience in expeditions fully justified his involvement in such a responsible undertaking. L. N. Bogoyavlensky, [ 5 ] [ 6 ] a specialist on this subject, was invited to head the plant.
In 1918, all radioactive residues that were in Petrograd were evacuated inland, first to the Berezniki soda plant in Perm province, [ 25 ] and in May 1920, under the new plant manager I. Ya. Bashilov, to the Bondyuzhsky chemical plant of Khimosnov (now the chemical plant named after L. Ya. Karpov in Mendeleevsk), [ 26 ] where only in the fall of 1920 did it become possible to put into operation a temporary pilot plant for radium extraction. [ 19 ] [ 22 ]
Together with engineer S. P. Alexandrov, V. G. Khlopin developed a method of mechanical enrichment to improve the quality of raw barium-radium sulfates rich in silica. Later, together with P. A. Volkov, he modified the Curie-Debierne method for converting such silica-saturated sulfates into carbonates, using a combination of soda with caustic soda. [ 4 ]
On the basis of theoretical considerations, V. G. Khlopin proposed several methods of fractional crystallization of barium-radium salts that avoid evaporation of solutions, relying instead on increasing the concentration of a common ion in the cold: fractional precipitation of chlorides with hydrochloric acid (1921), fractional precipitation of bromides (with M. A. Pasvik, 1923), fractional precipitation of nitrates (with P. I. Tolmachev and A. P. Ratner, 1924-1930), fractional precipitation of chromates (with M. S. Merkulova), and fractional precipitation of chlorides with zinc chloride (I. Ya. Bashilov and Ya. S. Vilnyansky, 1926). [ 4 ]
In 1924, V. G. Khlopin created a general theory of the fractional crystallization process, which greatly facilitated the calculation of the technological process as a whole and the design of the apparatus required for carrying it out. Several variants of the crystallization scheme, based on such calculations, were used in plant practice. Later this theory was applied and developed at the All-Russian Research Institute of Chemical Reagents and Particularly Pure Chemicals for obtaining chemically pure substances by recrystallization. [ 4 ] [ 6 ]
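The practical value of such a calculation is easy to illustrate. The sketch below is not Khlopin's actual plant scheme; it assumes, purely for illustration, that at each stage a fixed fraction f of the barium macrocomponent crystallizes and that the radium microcomponent follows the homogeneous distribution law discussed below with a constant coefficient D:

```python
# Illustrative sketch of a multi-stage fractional crystallization cascade.
# Assumptions (hypothetical values, not Khlopin's plant data): a constant
# homogeneous distribution coefficient D per stage, and a fixed fraction f
# of the barium macrocomponent crystallizing at each stage.

D = 10.0   # assumed Ra/Ba distribution coefficient (crystals vs. solution)
f = 0.5    # assumed fraction of barium recovered in each crystal crop

ra, ba = 1.0, 1.0   # arbitrary starting amounts of micro- and macrocomponent
for stage in range(1, 6):
    # Homogeneous law: x/(a - x) = D * y/(b - y), with y = f*b crystallized
    r = D * f / (1.0 - f)
    x = ra * r / (1.0 + r)      # radium carried into the crystal crop
    ra, ba = x, f * ba          # the crystal crop becomes the next feed
    print(f"stage {stage}: Ra/Ba enrichment = {ra / ba:.1f}x")
```

Under these assumptions each crop nearly doubles the Ra/Ba ratio, which is why a theory that made such cascades calculable mattered directly for plant practice.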
In this field, V. G. Khlopin and his colleagues and students (M. S. Merkulova, V. I. Grebenshchikov and others) developed a methodology for studying the isomorphous coprecipitation of microcomponents and ways of reaching equilibrium in the solid phase-solution system. The influence of many factors on this process was established, and V. G. Khlopin's hypothesis (1924) that fractional crystallization obeys the law of distribution of a substance between two immiscible phases was proved (Khlopin's law). It was shown that isomorphic co-crystallization can be used not only for isolating radioactive elements but also for studying their state in the liquid and solid phases, in particular for determining their valence; by this method V. G. Khlopin and A. G. Samartseva established the existence of compounds of divalent and hexavalent polonium. The adsorption of microcomponents on the surface of crystalline precipitates was also studied, as was their distribution between the gas phase and a crystalline precipitate and between a salt melt and the solid phase. [ 6 ]
Thus, V. G. Khlopin's studies in this area address four key issues: (1) the conditions for achieving true (thermodynamic) equilibrium of a microcomponent between the crystalline solid phase and the solution; (2) the use of radioelements as indicators in determining the mechanism of isomorphic substitution of dissociated ions; (3) the application of the general laws of isomorphous substitution to develop a method for fixing chemical compounds that are present in extremely small amounts and unstable in the solid phase, establishing their valence and chemical type, and revealing new chemical equilibria both in the solid phase and in solution; (4) the conditions of adsorption equilibrium between the solid crystalline phase and the solution. [ 4 ]
It has been rigorously established experimentally that: a) When true (thermodynamic) equilibrium is reached between a crystalline solid phase (an electrolyte) and a solution, a microcomponent present in the solution and isomorphic with the solid phase is distributed between the two immiscible solvents according to the Berthelot-Nernst law, in all known cases in its simple form C(cr)/C(sol) = K, or equivalently

x/(a − x) = D · y/(b − y),

where x is the amount of microcomponent transferred into the crystals, a is the total amount of microcomponent, and y and b are the corresponding values for the macrocomponent. b) The mechanism by which true equilibrium between the crystalline phase and the solution is reached reduces to repeated recrystallization of the solid phase, which in this case takes the place of diffusion in the solid state, practically absent under ordinary conditions. Recrystallization proceeds extremely fast while the crystals are of submicroscopic size; thus, in crystallization from supersaturated solutions, recrystallization and the establishment of equilibrium are complete at the stage when the crystallites are still small.
c) In the case of slow crystallization not from supersaturated but from saturated solutions, in particular during slow evaporation, true equilibrium between the crystals and the solution is not attained; the distribution of the microcomponent between the solid phase and the solution then follows the logarithmic law of Doerner and Hoskins, derived from the idea of continuous ion exchange between the faces of the growing crystal and the solution:

ln[a/(a − x)] = λ · ln[b/(b − y)].

Here, as above, a is the total amount of microcomponent, x is the amount of microcomponent transferred to the solid phase, b is the total amount of macrocomponent, and y is the amount of macrocomponent transferred to the solid phase. d) An abrupt change in the value of D with a change in temperature or in the composition of the liquid phase indicates the appearance of a new chemical equilibrium in the solution or in the solid phase.
The very fact that a microcomponent distributes between the crystalline solid phase and the solution (according to the Berthelot-Nernst or the Doerner-Hoskins law) can serve as evidence of the formation, between the microcomponent and the anion or cation of the solid phase, of compounds crystallizing isomorphically with the solid phase.
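To make the difference between the two limiting laws concrete, the following sketch computes the fraction of microcomponent captured by the crystals as a function of the fraction of macrocomponent crystallized; the common value assumed for D and λ is hypothetical and serves only for comparison:

```python
# Compare the two limiting co-crystallization laws for an assumed
# distribution coefficient (D and lambda set equal purely for comparison).

D = LAM = 10.0
a = b = 1.0  # normalized total amounts of micro- and macrocomponent

for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    y = frac * b  # macrocomponent crystallized
    # Berthelot-Nernst (true equilibrium): x/(a - x) = D * y/(b - y)
    r = D * y / (b - y)
    x_bn = a * r / (1.0 + r)
    # Doerner-Hoskins (growing crystal, no re-equilibration):
    # ln(a/(a - x)) = lambda * ln(b/(b - y))  =>  x = a*(1 - (1 - frac)**lambda)
    x_dh = a * (1.0 - (1.0 - frac) ** LAM)
    print(f"{frac:.0%} crystallized: equilibrium {x_bn/a:.1%}, "
          f"logarithmic {x_dh/a:.1%} of microcomponent captured")
```

The two regimes diverge most at small crystallized fractions, which is one way an abrupt change in the apparent D can betray a change in the underlying equilibrium, as noted above.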
Radioactive elements ( Ra and RaD ) were used by V. G. Khlopin and B. A. Nikitin as indicators in determining the nature of Grimm's mixed crystals of a new kind. These studies revealed a fundamental difference between true mixed crystals in the spirit of Eilhard Mitscherlich , in which one component substitutes for the other ion for ion, atom for atom, or molecule for molecule, and mixed crystals of a new kind, in which such simple substitution is impossible and replacement proceeds instead via ready-made sections of the crystal lattice of each component of very small size. The two scientists showed that mixed crystals of a new kind differ fundamentally from true mixed crystals by the presence of a lower miscibility limit: they are not formed at all when the concentration of one of the components is low. In this respect they resemble anomalous mixed crystals (as shown experimentally by V. G. Khlopin and M. A. Tolstaya), and relate to the latter roughly as a colloidal solution relates to a suspension. These works on the structure and properties of mixed crystals of a new kind and of anomalous mixed crystals led V. G. Khlopin to the idea that isomorphic bodies should be classified not by the structure of isomorphic mixtures in static equilibrium (as was done, for example, by V. M. Goldschmidt and his school), but by the mode of substitution of the components, taking into account the dynamics of formation of the isomorphic mixture. On this basis, all isomorphic bodies fall strictly into two groups according to the mode of substitution:
(a) Isomorphic compounds in the spirit of E. Mitscherlich, the truly isomorphic ones. In the formation of mixed crystals by such compounds, substitution follows the first principle: ion for ion, and so on. The distribution laws given above apply to such crystals. Such compounds have similar chemical composition and molecular structure.
(b) All other isomorphic compounds, for which the formation of mixed crystals follows the second principle: substitution of lattice sections of roughly unit-cell size (mixed crystals of a new kind, or isomorphism of the 2nd kind according to V. M. Goldschmidt), up to microscopic sizes in anomalous mixed crystals (such as FeCl 2 -NH 4 Cl, Ba(NO 3 ) 2 or Pb(NO 3 ) 2 with methylene blue, K 2 SO 4 with Ponceau red, etc., which show heterogeneity).
3. Thanks to the works discussed in the previous two paragraphs, V. G. Khlopin was able to restate in a new form the law of E. Mitscherlich, which makes it possible to judge the composition and molecular structure of unknown compounds from their formation of isomorphous mixtures with compounds whose composition and molecular structure are known. V. G. Khlopin proposed the method of isomorphous co-crystallization from solutions for fixing chemical compounds present in imponderably small amounts and unstable in the solid phase, and for determining their composition. The method made it possible to discover and determine the composition of individual compounds of divalent and hexavalent polonium (V. G. Khlopin and A. G. Samartseva).
4. Studying the adsorption of isomorphous ions on the surface of crystalline precipitates, V. G. Khlopin showed that adsorption equilibrium is established within 20–30 minutes and that the adsorption of isomorphous ions does not depend on the charge of the adsorbent surface as long as its solubility does not change. Reproducible results and full reversibility of the process are achieved only if the adsorbent surface remains unchanged throughout the experiment, that is, if the adsorbent's solubility remains constant; when the composition of the liquid phase changes, or under other conditions that alter the adsorbent's solubility, adsorption acquires a more complex character and is accompanied by co-crystallization that distorts the results. A similar phenomenon was encountered by L. Imre in studies of adsorption kinetics. V. G. Khlopin also gave a formula for determining the surface area of crystalline precipitates from the adsorption of an isomorphic ion on them and confirmed its applicability experimentally (V. G. Khlopin, M. S. Merkulova).
In this field, the following directions were developed in V. G. Khlopin's works:
1. the migration of radioelements, in particular relatively short-lived ones, in the Earth's crust;
2. the study of radium- and mesothorium-containing waters;
3. the determination of geologic age on the basis of radioactive data;
4. the distribution of helium and argon in the natural gases of the country;
5. the role of natural waters in the geochemistry of noble gases;
6. the distribution of boron in natural waters.
The scientist was the first to draw attention to the special importance of studying the migration of relatively short-lived radioelements in the Earth's crust for solving general geological and geochemical problems (1926). V. G. Khlopin pointed out a number of questions in these disciplines that could be solved by the proposed methods: establishing the sequence of geological and geochemical processes, determining the absolute age of relatively young and very young geological formations, and several other thematic areas. The migration of uranium and radium was studied experimentally.
Extensive studies establishing the presence of radium, uranium, and decay products of the thorium series in natural brines of the Soviet Union were carried out under the direction of V. G. Khlopin; numerous expeditions revealed a new form of natural accumulation of radium and its isotopes in brine waters of the Na-Ca-Cl type. His students and colleagues V. I. Baranov, L. V. Komlev, M. S. Merkulova, B. A. Nikitin, V. P. Savchenko, A. G. Samartseva, N. V. Tageev, and others participated in these studies.
These works concern, on the one hand, the foundations of the method and the analysis of its sources of error and, on the other hand, the experimental determination of the age of uraninites from different pegmatite veins, both by the uranium/lead ratio and by Lane's oxygen method, which was developed and refined in V. G. Khlopin's works. The scientist supervised research in this direction at the Radium Institute on the helium and lead methods, which yielded determinations of the geologic age of several formations. The work (with E. K. Gerling and E. M. Ioffe) on the migration of helium from minerals and rocks and the influence of the gas phase on this process also belongs to this cycle.
V. G. Khlopin began to study the distribution of helium in freely escaping gases of the country in 1922-1923. In 1924, he and A. I. Lakashuk discovered helium in the gases of the Novouzensky district of Saratov province; between 1924 and 1936, V. G. Khlopin and his students (E. K. Gerling, G. M. Ermolina, B. A. Nikitin, I. E. Starik, P. I. Tolmachev, and others) analyzed many samples of natural gases and compiled a distribution map from the data. A new type of gas jet, the so-called "air jets", characteristic of wide mountain basins, was identified for the first time in the Kokand area (1936).
The works in this direction followed directly from the previous section, on the basis of which V. G. Khlopin arrived at the concept of continuous gas exchange between the inner and outer gas atmospheres and, in particular, of the role of natural waters in the exchange of noble gases (other than helium) between the outer air and underground atmospheres. According to these ideas, underground gas atmospheres gradually become enriched in argon, krypton and xenon and depleted in neon relative to their content in air, so that the ratio of the heavy noble gases to neon, (Ar + Kr + Xe)/Ne, is greater in underground atmospheres than in air. It has also been found that gases dissolved in the lower layers of deep natural reservoirs are sharply enriched in heavy noble gases.
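Assuming the ratio in question is that of the heavy noble gases to neon, a rough benchmark can be computed from present-day atmospheric abundances by volume; the "underground" values in the sketch are illustrative placeholders, not measured data:

```python
# Benchmark (Ar + Kr + Xe)/Ne ratio in ordinary air, using approximate
# atmospheric abundances by volume in ppm.
air = {"Ne": 18.2, "Ar": 9340.0, "Kr": 1.14, "Xe": 0.087}
ratio_air = (air["Ar"] + air["Kr"] + air["Xe"]) / air["Ne"]
print(f"(Ar+Kr+Xe)/Ne in air ~ {ratio_air:.0f}")

# A hypothetical underground gas, enriched in heavy noble gases and
# depleted in neon as described above (placeholder numbers only):
ug = {"Ne": 9.0, "Ar": 15000.0, "Kr": 2.5, "Xe": 0.3}
ratio_ug = (ug["Ar"] + ug["Kr"] + ug["Xe"]) / ug["Ne"]
print(f"(Ar+Kr+Xe)/Ne underground (illustrative) ~ {ratio_ug:.0f}")
```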
The beginning of this direction of geochemistry was the work on boric acid springs in northwestern Persia and Transcaucasia; these studies were later extended to other areas of the USSR. It was found that boron is a typical element of the waters of oil-bearing areas, which are enriched in it. V. G. Khlopin was also the first to point out the need to prospect for boron compounds in the Embinsky and Gurievsky counties of the Ural region, where the Inderskoye deposit was discovered much later.
V. G. Khlopin’s work in this area concerns gas, volumetric, gravimetric and colorimetric analysis.
Gas analysis. V. G. Khlopin developed instruments for the rapid estimation of the amounts of helium and neon in gas mixtures (V. G. Khlopin, E. K. Gerling, 1932). These devices simplified the analysis of noble gases so much that it could be incorporated into the general practice of gas analysis.
Volumetric analysis. V. G. Khlopin was the first in the USSR to introduce the method of differential reduction and differential oxidation for the simultaneous determination of several cations in a mixture (1922), and he worked out in practice the simultaneous determination of vanadium, iron and uranium; volumetric methods for the determination of vanadium and uranium were proposed.
Gravimetric analysis. V. G. Khlopin developed a quantitative method for separating tetravalent uranium, in the form of UF 4 ·NH 4 F·1/2H 2 O, from hexavalent uranium and from trivalent and divalent iron.
Colorimetric analysis. The scientist proposed a method for determining small amounts of iridium in the presence of platinum.
Under the leadership of V. G. Khlopin, several further methods of analysis were also developed: a volumetric method for determining small amounts of boron, a volumetric method for determining SO 4 2− and Mg 2+ , gravimetric methods for determining uranium, a colorimetric method for determining fluorine, and others.
In the course of studying natural radioactivity, that is, the radiation of radioactive elements and radioactive transformations, new natural radioactive elements were discovered and systematized into radioactive families: those of uranium and thorium, and the third, so-called actinium family, the actinides (a name proposed by S. A. Shchukarev). F. Soddy's discovery of the law of radioactive displacements made it possible to conclude that the final stable decay products of the elements of all three families are three isotopes of the same element, lead .
The Bohr model of the atom rests on the study of natural radioactivity, which revealed the complexity of atomic structure: the decay of an atom produces atoms of other elements and is accompanied by three types of radiation, α , β and γ .
The neutron-proton theory of the structure of the atomic nucleus owes its origin to the discovery of the new elementary particles that make up the nucleus, the neutron ( 1 0 n) and the proton ( 1 1 p), made possible by the artificial splitting of the atom under the influence of α-particles (1919): 14 7 N + 4 2 He → 17 8 O + 1 1 H, a reaction accompanied by the release of a proton (experiments with a number of other light elements soon followed). [ 27 ]
Further fundamental research in this area showed that in light elements the number of neutrons in the nucleus equals the number of protons, whereas toward the heavy elements neutrons come to outnumber protons and the nuclei become unstable, that is, radioactive.
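This trend can be checked directly from a few representative nuclides; the sketch below simply computes the neutron number N = A − Z from standard mass numbers:

```python
# Neutron-to-proton ratio for representative nuclides (standard mass
# numbers A and atomic numbers Z; N = A - Z).
nuclides = [("O-16", 16, 8), ("Ca-40", 40, 20), ("Sn-118", 118, 50),
            ("Pb-206", 206, 82), ("U-238", 238, 92)]

for name, a, z in nuclides:
    n = a - z
    print(f"{name}: Z = {z}, N = {n}, N/Z = {n / z:.2f}")
# N/Z grows from 1.00 for light nuclei to about 1.6 for uranium,
# whose nuclei are radioactive.
```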
As part of the atomic project, V. G. Khlopin was a member of the Technical Council [ 28 ] and was responsible for the activities of the Radium Institute. Through the efforts of V. G. Khlopin and Alexey Kuznetsov , the First Secretary of the Leningrad Regional Committee and City Committee of the All-Union Communist Party of Bolsheviks, the Radium Institute received additional premises. The decision to allocate the space was made by the Special Committee in November 1945 and carried out by the chairman of the Operations Bureau of the Council of People's Commissars of the RSFSR, A. N. Kosygin, and the representative of the State Planning Committee in the Special Committee, N. A. Borisov.
After graduating from St. Petersburg University, V. G. Khlopin was retained at the department of Professor L. A. Chugaev. While still a student, in 1911, he conducted a practical course on the chemical methods of sanitary analysis for physicians at the St. Petersburg Clinical Institute, and he continued this course in 1912 and 1913.
From 1917 to 1924, V. G. Khlopin served as an assistant in the department of general chemistry at the university. From 1924, as an assistant professor, he taught a special course on radioactivity and the chemistry of radioelements, the first of its kind in the USSR; since only brief and incomplete data and summaries existed, and only in the foreign literature, the course was developed entirely by V. G. Khlopin. He taught it until 1930 and resumed it as a professor in 1934, teaching until 1935. In the spring of 1945, the scientist organized and headed the department of radiochemistry at Leningrad University.
The course of lectures on radiochemistry that V. G. Khlopin developed in collaboration with B. A. Nikitin and A. P. Ratner formed the basis of an extensive monograph on the chemistry of radioactive substances.
V. G. Khlopin took an active part in the work of the Russian Physical-Chemical Society, and after the latter was transformed into the All-Union Chemical Society, he was a member of the Council of the Leningrad branch of the organization, and later was its chairman.
At the Academy of Sciences, V. G. Khlopin was a member of the Analytical Commission, the Commission on Isotopes, and the Commission for the Development of the Scientific Heritage of D. I. Mendeleev . From 1941 to 1945, as Deputy Academician-Secretary, he did a great deal of work in the Department of Chemical Sciences of the USSR Academy of Sciences. During World War II, V. G. Khlopin served as deputy chairman of the Commission for the Mobilization of the Resources of the Volga and Kama Region and chairman of its chemical section.
For many years he was a member of the Editorial Council of the Chemical-Technical Publishing House (Khimteoret). The scientist was the executive editor of the journal Uspekhi Khimii and was on the editorial boards of the journals: “Reports of the USSR Academy of Sciences”, “Izvestia of the USSR Academy of Sciences (Department of Chemical Sciences)”, “Journal of General Chemistry” and “Journal of Physical Chemistry”.
Vitaly Grigorievich Khlopin trained students in all the most important areas of scientific activity, many of whom became not only independent scientific researchers, but also the creators of their own scientific directions and schools.
The following were named after V. G. Khlopin: the Radium Institute of the Academy of Sciences (now the V. G. Khlopin Radium Institute) and the mineral khlopinite.
In the 1950s, a memorial plaque was installed on the house at 61 Lesnoy Avenue with the text: “The outstanding Russian chemist Vitaly Grigorievich Khlopin lived in this house from 1945 to 1950.”
|
https://en.wikipedia.org/wiki/Vitaly_Khlopin
|
Vitamin A receptor , also known as Stimulated by retinoic acid 6 or STRA6 , is a protein originally discovered as a transmembrane cell-surface receptor for retinol-binding protein . [ 1 ] [ 2 ] [ 3 ] STRA6 is unique in that it functions both as a membrane transporter and as a cell surface receptor , particularly a cytokine receptor . In fact, STRA6 may be the first of a whole new class of proteins that might be called "cytokine signaling transporters". [ 4 ] STRA6 is primarily known as the receptor for retinol-binding protein and for its role in the transport of retinol (vitamin A) to specific sites such as the eye. [ 5 ] It removes retinol (ROH) from holo-retinol-binding protein (RBP) and transports it into the cell, where it is metabolized into retinoids [ 6 ] and/or stored as a retinyl ester. [ 7 ] As a receptor, once holo-RBP is bound, STRA6 activates the JAK/STAT pathway, resulting in activation of the transcription factor STAT5 . These two functions, retinol transporter and cytokine receptor, while using different pathways, are processes that depend on each other. [ 8 ]
In the first step, holo-retinol-binding protein (holo-RBP; RBP bound to retinol, i.e. the RBP-ROH complex) binds to the extracellular portion of STRA6. This facilitates the release of retinol through the transporter. ROH is then transferred to cellular retinol binding protein 1 (CRBP1), an intracellular acceptor of retinol that attaches to the CRBP Binding Loop (CBL) of STRA6. This transport of ROH in turn activates JAK2 , which phosphorylates STRA6 at the Y643 (tyrosine) residue. [ 8 ] This phosphorylation enables the CBL to extend further into the cell. Holo-CRBP-I leaves the CBL and is replaced by apo-CRBP-I (the unbound form). Holo-CRBP-I continues to the endoplasmic reticulum (ER), where lecithin retinol acyltransferase (LRAT) is bound; ROH is released to LRAT, which converts retinol into retinyl esters. [ 7 ] Following the release of holo-CRBP-I from intracellular STRA6, STAT5 is recruited to the phosphorylated Y643 region of STRA6, where it is in turn phosphorylated by JAK2. This activates STAT5, which then travels to the nucleus to induce expression of target genes, including suppressor of cytokine signaling 3 ( SOCS3 ), a strong inhibitor of insulin signaling. [ 7 ]
Research has demonstrated that overexpression of CRBP-I increases the ability of the RBP-ROH complex to phosphorylate STRA6 and, downstream, JAK2 and STAT5. Suppressing CRBP-I, on the other hand, decreases the ability of the RBP-ROH complex to phosphorylate STRA6 and its signaling components. Similarly, reducing the expression of LRAT decreases the ability of the RBP-ROH complex to phosphorylate JAK2 and STAT5. [ 8 ] Therefore, both CRBP-I and LRAT are necessary for the STRA6 signaling cascade triggered by the binding and transport of retinol. Conversely, JAK2 is responsible for the activation of STRA6, after which apo-CRBP-I is recruited to the intracellular CBL of STRA6 and vitamin A can be transferred by the receptor to CRBP-I. [ 8 ] Thus, STRA6 signaling and STRA6 transport of vitamin A are mutually dependent: uptake of retinol is required for STRA6 signaling, and JAK2 activation of STRA6 is necessary for retinol uptake.
STRA6 is found at high levels in various tissues, including the choroid plexus, the brain microvasculature, the testis, the spleen, the kidney, the eye, the placenta, and the female reproductive tract. Surprisingly, however, it is not found in liver tissue, where vitamin A (retinol) is primarily stored. [ 9 ] [ 10 ] Because of its importance in vitamin A transport, STRA6 mutations are most commonly associated with problems of the eye, such as a reduction in retinal thickness and shortening of the inner and outer segments of rod photoreceptors. Accordingly, STRA6 mutations result in a number of different abnormalities of the eye, such as Microphthalmia , Anophthalmia , and Coloboma . [ 10 ] [ 11 ]
However, STRA6 is clearly vital for more than just eye development as it is expressed in many different tissues detailed above. Other disorders that result from STRA6 mutations include pulmonary dysgenesis, cardiac malformations, and mental retardation. In fact, research has shown that homozygous mutations in human STRA6 gene can lead to Matthew-Wood syndrome, which is a combination of all the mentioned disorders. In this respect, STRA6 mutations can be particularly fatal during the embryonic stage. [ 9 ] [ 10 ]
STRA6 has also been associated with facilitating insulin resistance, because STRA6 signaling activates the transcription of STAT5 target genes. One of these target genes is suppressor of cytokine signaling 3 (SOCS3), a strong inhibitor of insulin signaling. As a result, STRA6 signaling suppresses the response to insulin by inhibiting insulin-induced phosphorylation of the insulin receptor (IR). [ 8 ] In other words, the increased levels of RBP found in obese animals (which increase STRA6 activity) can promote insulin resistance. Consistent with this close relationship between STRA6 and insulin resistance, single nucleotide polymorphisms in STRA6 have been shown to be associated with type 2 diabetes. [ 8 ]
|
https://en.wikipedia.org/wiki/Vitamin_A_receptor
|
The total synthesis of the complex biomolecule vitamin B 12 was accomplished in two different approaches by the collaborating research groups of Robert Burns Woodward at Harvard [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] and Albert Eschenmoser at ETH [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] in 1972. The accomplishment required the effort of no less than 91 postdoctoral researchers (Harvard: 77, ETH: 14) [ 13 ] : 9-10 [ 14 ] , and 12 Ph.D. students (at ETH [ 12 ] : 1420 ) from 19 different nations over a period of almost 12 years. [ 5 ] : 1:14:00-1:14:32,1:15:50-1:19:35 [ 14 ] : 17-18 The synthesis project [ 15 ] induced and involved a major change of paradigm [ 16 ] [ 17 ] : 37 [ 18 ] : 1488 in the field of natural product synthesis . [ 19 ] [ 20 ] [ 21 ]
Vitamin B 12 , C 63 H 88 CoN 14 O 14 P, is the most complex of all known vitamins . Its chemical structure had been determined by x-ray crystal structure analysis in 1956 by the research group of Dorothy Hodgkin ( Oxford University ) in collaboration with Kenneth N. Trueblood at UCLA and John G. White at Princeton University . [ 24 ] [ 25 ] The core of the molecule is the corrin structure, a nitrogenous tetradentate ligand system. [ note 1 ] This is biogenetically related to the porphyrins and chlorophylls , yet differs from them in important respects: the carbon skeleton lacks one of the four meso carbons between the five-membered rings, two rings (A and D, fig. 1) being directly connected by a carbon-carbon single bond . The corrin chromophore system is thus non-cyclic and extends over three meso positions only, incorporating three vinylogous amidine units. Lined up at the periphery of the macrocyclic ring are eight methyl groups and four propionic and three acetic acid side chains. Nine carbon atoms on the corrin periphery are chirogenic centers . The tetradentate, monobasic corrin ligand is equatorially coordinated to a trivalent cobalt ion which bears two additional axial ligands . [ note 2 ]
Several natural variants of the B 12 structure exist that differ in these axial ligands. In the vitamin itself, the cobalt bears a cyano group on the top side of the corrin plane ( cyanocobalamin ), and a nucleotide loop on the other. This loop is connected on its other end to the peripheral propionic amide group at ring D and consists of structural elements derived from aminopropanol , phosphate , ribose , and 5,6-dimethylbenzimidazole . One of the nitrogen atoms of the imidazole ring is axially coordinated to the cobalt, the nucleotide loop thus forming a nineteen-membered ring. All side chain carboxyl groups are amides.
Cobyric acid, one of the natural derivatives of vitamin B 12 , [ 26 ] lacks the nucleotide loop; depending on the nature of the two axial ligands, it displays instead its propionic acid function at ring D as carboxylate (as shown in fig. 1), or carboxylic acid (with two cyanide ligands at cobalt).
The structure of vitamin B 12 was the first low-molecular weight natural product determined by x-ray analysis rather than by chemical degradation. Thus, while the structure of this novel type of complex biomolecule was established, its chemistry remained essentially unknown; exploration of this chemistry became one of the tasks of the vitamin's chemical synthesis . [ 12 ] : 1411 [ 18 ] : 1488-1489 [ 27 ] : 275 In the 1960s, synthesis of such an exceptionally complex and unique structure presented the major challenge at the frontier of research in organic natural product synthesis. [ 17 ] : 27-28 [ 1 ] : 519-521
Already in 1960, the research group of the biochemist Konrad Bernhauer [ de ] in Stuttgart had reconstituted vitamin B 12 from one of its naturally occurring derivatives, cobyric acid, [ 26 ] by stepwise construction of the vitamin's nucleotide loop. [ note 4 ] This work amounted to a partial synthesis of vitamin B 12 from a natural product containing all the structural elements of vitamin B 12 except the nucleotide loop. Therefore, cobyric acid was chosen as the target molecule for a total synthesis of vitamin B 12 . [ 6 ] : 183-184 [ 1 ] : 521 [ 8 ] : 367-368
Collaborative work [ 3 ] : 1456 [ 17 ] [ 30 ] : 302-313 of research groups at Harvard and at ETH resulted in two cobyric acid syntheses, both concomitantly accomplished in 1972, [ 31 ] [ 32 ] one at Harvard [ 3 ] and the other at ETH. [ 10 ] [ 11 ] [ 12 ] A "competitive collaboration" [ 17 ] : 30 [ 33 ] : 626 of that size, involving 103 graduate students and postdoctoral researchers for a total of almost 177 person-years, [ 13 ] : 9-10 is so far unique in the history of organic synthesis . [ 4 ] : 0:36:25-0:37:37 The two syntheses are intricately intertwined chemically, [ 18 ] : 1571 yet they differ fundamentally in the way the central macrocyclic corrin ligand system is constructed. Both strategies are patterned after two model corrin syntheses developed at ETH. [ 8 ] [ 18 ] : 1496,1499 [ 34 ] : 71-72 The first, published in 1964, [ 28 ] achieved the construction of the corrin chromophore by combining an A-D-component with a B-C-component via iminoester / enamine -C,C- condensations , the final corrin-ring closure being attained between rings A and B. [ 35 ] The second model synthesis, published in 1969, [ 36 ] explored a novel photochemical cycloisomerization process to create the direct A/D-ring junction as the final corrin-ring closure between rings A and D. [ 37 ]
The A/B approach to the cobyric acid syntheses was collaboratively pursued and accomplished in 1972 at Harvard. It combined a bicyclic Harvard A-D-component with an ETH B-C-component , and closed the macrocyclic corrin ring between rings A and B. [ 3 ] : 145,176 [ 4 ] : 0:36:25-0:37:37 The A/D approach to the synthesis, accomplished at ETH and finished at the same time as the A/B approach also in 1972, successively adds rings D and A to the B-C-component of the A/B approach and attains the corrin ring closure between rings A and D . [ 10 ] [ 11 ] [ 12 ] The paths of the two syntheses met in a common corrinoid intermediate. [ 11 ] : 519 [ 38 ] : 172 The final steps from this intermediate to cobyric acid were carried out in the two laboratories again collaboratively, each group working with material prepared via their own approach, respectively. [ 17 ] : 33 [ 18 ] : 1567
Woodward and Eschenmoser embarked on the project of a chemical synthesis of vitamin B 12 independently from each other. The ETH group started with a model study on how to synthesize a corrin ligand system in December 1959. [ 18 ] : 1501 In August 1961, [ 17 ] : 29 [ 13 ] : 7 the Harvard group began attacking the buildup of the B 12 structure directly by aiming at the most complex part of the B 12 molecule, the "western half" [ 1 ] : 539 that contains the direct junction between rings A and D (the A-D-component). Already in October 1960, [ 17 ] : 29 [ 13 ] : 7 [ 39 ] : 67 the ETH group had commenced the synthesis of a ring-B precursor of vitamin B 12 .
At the beginning, [ 40 ] progress at Harvard was rapid, until an unexpected stereochemical course of a central ring-formation step interrupted the project. [ 41 ] [ 17 ] : 29 Woodward's recognition of the stereochemical enigma that came to light through the puzzling behavior of one of his carefully planned synthetic steps became, according to his own writings, [ 41 ] part of the developments that led to the orbital symmetry rules .
After 1965, the Harvard group continued work towards an A-D-component along a modified plan, using (−)-camphor [ 42 ] as the source of ring D. [ 17 ] : 29 [ 18 ] : 1556
By 1964, the ETH group had accomplished the first corrin model synthesis, [ 28 ] [ 27 ] : 275 and also the preparation of a ring-B precursor as part of a construction of the B 12 molecule itself. [ 39 ] [ 43 ] Since independent progress of the two groups towards their long-term objective was so clearly complementary, Woodward and Eschenmoser decided in 1965 [ 18 ] : 1497 [ 17 ] : 30 to join forces and to pursue from then on the project of a B 12 synthesis collaboratively, planning to utilize the ligand construction (ring coupling of components) strategy of the ETH model system. [ 2 ] : 283 [ 18 ] : 1555-1574
By 1966, the ETH group had succeeded in synthesizing the B-C-component ("eastern half" [ 1 ] : 539 ) by coupling their ring-B precursor to the ring-C precursor. [ 18 ] : 1557 The latter had also been prepared at Harvard from (−)-camphor by a strategy conceived and used earlier by A. Pelter and J. W. Cornforth in 1961. [ note 6 ] At ETH, the synthesis of the B-C-component involved the implementation of the C,C-condensation reaction via sulfide contraction . This newly developed method turned out to provide a general solution to the problem of constructing the characteristic structural elements of the corrin chromophore, the vinylogous amidine systems bridging the four peripheral rings. [ 18 ] : 1499
Early in 1967, the Harvard group accomplished the synthesis of the model A-D-component, [ note 7 ] with the f-side chain undifferentiated, bearing a methyl ester function like all other side chains. [ 18 ] : 1557 From then on, the two groups systematically exchanged samples of their respective halves of the corrinoid target structure. [ 17 ] : 30-31 [ 18 ] : 1561 [ 32 ] : 17 By 1970, they had collaboratively connected Harvard's undifferentiated A-D-component with ETH's B-C-component, producing dicyano-cobalt(III)-5,15-bisnor-heptamethyl-cobyrinate 1 (fig. 4). [ note 2 ] The ETH group identified this totally synthetic corrinoid intermediate by direct comparison with a sample produced from natural vitamin B 12 . [ 2 ] : 301-303 [ 18 ] : 1563
In this advanced model study, reaction conditions for the demanding processes of the C/D-coupling and the A/B-cyclization via sulfide contraction method were established. Those for the C/D-coupling were successfully explored in both laboratories, the superior conditions were those found at Harvard, [ 2 ] : 290-292 [ 18 ] : 1562 while the method for the A/B-ring closure via an intramolecular version of the sulfide contraction [ 46 ] [ 36 ] [ 47 ] was developed at ETH. [ 2 ] : 297-299 [ 48 ] [ 18 ] : 1562-1564 Later it was shown at Harvard that the A/B-ring closure could also be achieved by thio -iminoester/enamine condensation. [ 2 ] : 299-300 [ 18 ] : 1564
By early 1971, the Harvard group had accomplished the synthesis of the final A-D-component, [ note 8 ] containing the f-side chain carboxyl function at ring D differentiated from all the carboxyl functions as a nitrile group (as shown in 2 in fig. 4 ; see also fig. 3 ). [ 3 ] : 153-157 The A/D-part of the B 12 structure incorporates the constitutionally and configurationally most intricate part of the vitamin molecule; its synthesis is regarded as the apotheosis of the Woodwardian art in natural product total synthesis. [ 11 ] : 519 [ 12 ] : 1413 [ 18 ] : 1564 [ 33 ] : 626
As far back as 1966, [ 37 ] : 1946 the ETH group had started to explore, once again in a model system, an alternative strategy of corrin synthesis in which the corrin ring would be closed between rings A and D. The project was inspired by the conceivable existence of a thus far unknown bond-reorganization process. [ 37 ] : 1943-1946 This process, if it existed, would make possible the construction of cobyric acid from one single starting material. [ 6 ] : 185 [ 8 ] : 392,394-395 [ 33 ] Importantly, the hypothetical process, interpreted as implying two sequential rearrangements, was recognized to be formally covered by the new reactivity classifications of sigmatropic rearrangements and electrocyclizations propounded by Woodward and Hoffmann in the context of their orbital symmetry rules! [ 8 ] : 395-397,399 [ 11 ] : 521 [ 49 ] [ 18 ] : 1571-1572
By May 1968, [ 18 ] : 1555 the ETH group had demonstrated in a model study that the envisaged process, a photochemical A/D-seco-corrinate→corrinate cycloisomerization, does in fact exist. The process was first found to proceed with the Pd complex, but not at all with the corresponding Ni(II)- or cobalt(III)-A/D-seco-corrinate complexes. It also went smoothly in complexes of zinc and other photochemically inert and loosely bound metal ions, [ 8 ] : 400-404 [ 12 ] : 1414 which, after ring closure, could easily be replaced by cobalt. [ 8 ] : 404 These discoveries opened the door to what eventually became the photochemical A/D approach to the cobyric acid synthesis. [ 7 ] : 31 [ 9 ] : 72-74 [ 37 ] : 1948-1959
Starting in fall of 1969 [ 51 ] : 23 with the B-C-component of the A/B approach and a ring-D precursor prepared from the enantiomer of the starting material leading to the ring-B precursor, it took PhD student Walter Fuhrer [ 51 ] less than one and a half years [ 17 ] : 32 to translate the photochemical model corrin synthesis into a synthesis of dicyano-cobalt(III)-5,15-bisnor-a,b,d,e,g-pentamethyl-cobyrinate-c- N,N -dimethylamide-f-nitrile 2 ( fig. 4 ), the common corrinoid intermediate on the way to cobyric acid. At Harvard, the very same intermediate 2 was obtained around the same time by coupling the ring-D differentiated Harvard A-D-component (available in spring 1971 [ 18 ] : 1564 footnote 54a [ 3 ] : 153-157 ) with the ETH B-C-component, applying the condensation methods developed earlier using the undifferentiated A-D-component. [ 1 ] : 544-547 [ 2 ] : 285-300
Thus, in spring 1971, [ 33 ] : 634 two different routes to a common corrinoid intermediate 2 ( fig. 4 ) on the way to cobyric acid had become available, one requiring 62 chemical steps (the Harvard/ETH A/B approach ), the other 42 (the ETH A/D approach ). In both approaches, the four peripheral rings derived from enantiopure precursors possessing the correct sense of chirality, thereby circumventing major stereochemical problems in the buildup of the ligand system. [ 1 ] : 520-521 [ 7 ] : 12-13 [ 11 ] : 521-522 In the construction of the A/D-junction by the A/D-secocorrin→ corrin cycloisomerization, formation of two A/D- diastereomers had to be expected. Using cadmium(II) as the coordinating metal ion led to a very high diastereoselectivity [ 51 ] : 44-46 in favor of the natural A/D- trans -isomer. [ 12 ] : 1414-1415
Once the corrin structure was formed by either approach, the three C-H- chirogenic centers at the periphery adjacent to the chromophore system turned out to be prone to epimerizations with exceptional ease. [ 2 ] : 286 [ 9 ] : 88 [ 3 ] : 158 [ 4 ] : 1:53:33-1:54:08 [ 18 ] : 1567 This required a separation of diastereomers after most of the chemical steps in this advanced stage of the syntheses. It was fortunate indeed that, just around that time, the technique of high pressure liquid chromatography (HPLC) had been developed in analytical chemistry. [ 52 ] HPLC became an indispensable tool in both laboratories; [ 32 ] : 25 [ 9 ] : 88-89 [ 3 ] : 165 [ 4 ] : 0:01:52-0:02:00,2:09:04-2:09:32 its use in the B 12 project, pioneered by Jakob Schreiber at ETH, [ 53 ] was the earliest application of the technique in natural product synthesis. [ 18 ] : 1566-1567 [ 38 ] : 190 [ 54 ]
The final conversion of the common corrinoid intermediate 2 (fig. 6) from the two approaches into the target cobyric acid required the introduction of the two missing methyl groups at the meso positions of the corrin chromophore between rings A/B and C/D, as well as the conversion of all peripheral carboxyl functions into their amide form, except the critical carboxyl at the ring-D f-side chain (see fig. 6). These steps were collaboratively explored in strictly parallel fashion in both laboratories, the Harvard group using material produced via the A/B approach, and the ETH group material prepared by the photochemical A/D approach. [ 17 ] : 33 [ 18 ] : 1567
The first decisive identification of a totally synthetic intermediate on the way to cobyric acid was carried out in February 1972 with a crystalline sample of totally synthetic dicyano-cobalt(III)-hexamethyl-cobyrinate-f-amide 3 (fig. 6 [ note 2 ] ), found to be identical in all data with a crystalline relay sample made from vitamin B 12 by methanolysis to cobester 4 , [ note 9 ] followed by partial ammonolysis and separation of the resulting mixture. [ 55 ] : 44-45,126-143 [ 3 ] : 170 [ 57 ] : 46-47 At the time when Woodward announced the "Total Synthesis of Vitamin B 12 " at the IUPAC conference in New Delhi in February 1972, [ 3 ] : 177 the totally synthetic sample of the f-amide was one that had been made at ETH by the photochemical A/D approach, [ 17 ] : 35 [ 58 ] : 148 [ 18 ] : 1569-1570 while the first sample of synthetic cobyric acid, identified with natural cobyric acid, had been obtained at Harvard by partial synthesis from B 12 -derived f-amide relay material. [ 57 ] : 46-47 [ 3 ] : 171-176 Thus, the Woodward/Eschenmoser achievement around that time had been, strictly speaking, two formal total syntheses of cobyric acid, as well as two formal total syntheses of the vitamin. [ 57 ] : 46-47 [ 18 ] : 1569-1570
In the later course of 1972, two crystalline epimers of totally synthetic dicyano-cobalt(III)-hexamethyl-cobyrinate-f- amide 3 , as well as two crystalline epimers of the totally synthetic f-nitrile, all prepared via both synthetic approaches, were stringently identified chromatographically and spectroscopically with the corresponding B 12 -derived substances. [ 18 ] : 1570-1571 [ 55 ] : 181-197,206-221 [ 5 ] : 0:21:13-0:46:32,0:51:45-0:52:49 [ 59 ] At Harvard, cobyric acid was then made also from totally synthetic f-amide 3 prepared via the A/B approach. [ 57 ] : 48-49 Finally, in 1976 at Harvard, [ 57 ] totally synthetic cobyric acid was converted into vitamin B 12 via the pathway pioneered by Konrad Bernhauer [ de ] . [ note 4 ]
Over the almost 12 years it took the two groups to reach their goal, both Woodward and Eschenmoser periodically reported on the state of the collaborative project in lectures, some of which appeared in print. Woodward discussed the A/B approach in lectures published in 1968 [ 1 ] and 1971, [ 2 ] culminating in the announcement of the "Total Synthesis of Vitamin B 12 " in New Delhi in February 1972, [ 3 ] : 177 published in 1973. [ 3 ] This publication, and the lectures with the same title that Woodward delivered in the later part of 1972, [ 4 ] [ 5 ] are confined to the A/B approach of the synthesis and do not discuss the ETH A/D approach.
Eschenmoser had discussed the ETH contributions to the A/B approach in 1968 at the 22nd Robert A. Welch Foundation conference in Houston, [ 7 ] as well as in his 1969 RSC Centenary Lecture "Roads to Corrins", published in 1970. [ 8 ] He presented the ETH photochemical A/D approach to the B 12 synthesis at the 23rd IUPAC Congress in Boston in 1971. [ 9 ] The Zürich group announced the accomplishment of the synthesis of cobyric acid by the photochemical A/D approach in two lectures delivered by the PhD students Maag and Fuhrer at the Swiss Chemical Society Meeting in April 1972; [ 10 ] Eschenmoser presented the lecture "Total Synthesis of Vitamin B 12 : the Photochemical Route" for the first time as the Wilson Baker Lecture at the University of Bristol, Bristol/UK, on May 8, 1972. [ note 10 ]
As a joint full publication of the syntheses by the Harvard and ETH groups (announced in [ 10 ] and expected in [ 11 ] ) had not appeared by 1977, [ note 12 ] an article describing the final version of the photochemical A/D approach, already accomplished in 1972, [ 10 ] [ 51 ] [ 55 ] [ 63 ] was published in 1977 in Science. [ 12 ] [ 58 ] : 148 This article is an extended English translation of one that had already appeared in 1974 in Naturwissenschaften, [ 11 ] based on a lecture given by Eschenmoser on January 21, 1974, at a meeting of the Zürcher Naturforschende Gesellschaft. Four decades later, in 2015, the same author finally published a series of six full papers describing the work of the ETH group on corrin synthesis. [ 64 ] [ 18 ] [ 65 ] [ 66 ] [ 35 ] [ 37 ] Part I of the series contains a chapter entitled "The Final Phase of the Harvard/ETH Collaboration on the Synthesis of Vitamin B 12 ", [ 18 ] : 1555-1574 in which the contributions of the ETH group to the collaborative work on the synthesis of vitamin B 12 between 1965 and 1972 are recorded.
The entire ETH work is documented in full experimental detail in publicly accessible Ph.D. theses, [ 39 ] [ 43 ] [ 60 ] [ 46 ] [ 61 ] [ 56 ] [ 62 ] [ 44 ] [ 48 ] [ 51 ] [ 55 ] [ 63 ] almost 1,900 pages, all in German. [ 67 ] Contributions of the 14 postdoctoral ETH researchers involved in the cobyric acid syntheses are mostly integrated in these theses. [ 12 ] : 1420 [ 64 ] : 1480 [ 13 ] : 12,38 The detailed experimental work at Harvard was documented in reports by the 77 postdoctoral researchers involved, with a total volume of more than 3,000 pages. [ 13 ] : 9,38 [ note 11 ]
Representative reviews of the two approaches to the chemical synthesis of vitamin B 12 have been published in detail by A. H. Jackson and K. M. Smith, [ 45 ] T. Goto, [ 68 ] R. V. Stevens, [ 38 ] and K. C. Nicolaou & E. J. Sorensen, [ 15 ] [ 19 ] and in summary by J. Mulzer & D. Riether [ 69 ] and G. W. Craig, [ 14 ] [ 33 ] besides many other publications in which these epochal syntheses are discussed. [ note 13 ]
In the A/B approach to cobyric acid, the Harvard A-D-component was coupled to the ETH B-C-component between rings D and C, and then closed to a corrin between rings A and B. Both these critical steps were accomplished by C,C-coupling via sulfide contraction , a new reaction type developed in the synthesis of the B-C-component at ETH. The A-D-component was synthesized at Harvard from a ring-A precursor (prepared from achiral starting materials), and a ring-D precursor prepared from (−)-camphor . A model A-D-component was used to explore the coupling conditions; this component differed from the A-D-component used in the final synthesis by having as the functional group at the ring-D f-side chain a methyl ester group (like all other side chains) instead of a nitrile group.
Synthesis of the ring-A precursor
The starting point for the synthesis of the ring-A precursor was the methoxydimethylindole H-1 , synthesized by condensation of the Schiff base from m-anisidine and acetoin . Reaction with the Grignard reagent of propargyl iodide gave the racemic propargyl indolenine rac-H-2 ; ring closure to the aminoketone rac-H-3 was brought about by BF 3 and HgO in MeOH through the intermediate rac-H-2a ( electrophilic addition), with the two methyl groups forced into a cis -relationship for kinetic as well as thermodynamic reasons. [ 1 ] : 521-522
Resolution of the racemic aminoketone into the two enantiomers . Reaction of rac-H-3 with (−)-α-phenylethyl isocyanate permitted isolation by crystallization of one of the two diastereomeric urea derivatives formed (the other does not crystallize). Treatment of the racemic ketone rac-H-3 (or of the mother liquors from the previous crystallization) with (+)-α-phenylethyl isocyanate gave the enantiomer of the first urea derivative. Pyrolytic decomposition of each of these urea derivatives led to the enantiopure aminoketones, the desired (+)-H-3 and (−)-H-3 . [ 1 ] : 524-525 The "unnatural" (−)-enantiomer (−)-H-3 was used to determine the absolute configuration ; in various later steps, (−)-H-3 and enantio-intermediates derived from it were used as model compounds in exploratory experiments. [ 38 ] : 173 Woodward wrote regarding the unnatural enantiomer: "our experience has been such that this is just about the only kind of model study which we regard as wholly reliable". [ 1 ] : 529
Determination of the absolute configuration of the ring-A precursor (+)-H-3. For this determination, the levorotatory ("unnatural") enantiomer of the aminoketone, (−)-H-3 , was used in order to save precious material: acylation of the amino group of (−)-H-3 with chloroacetyl chloride , followed by treatment of the product H-3a with potassium t -butoxide in t -butanol, afforded the tetracyclic keto-lactam H-3b . Its keto carbonyl was converted to a methylene group by desulfurization of the dithioketal of H-3b with Raney nickel to give lactam H-3c . Destruction of the aromatic ring by ozonolysis , involving the loss of a carboxyl function by spontaneous decarboxylation , led to the bicyclic lactam-carboxylic acid H-3d . This material was identified with a product H-3h derived from (+)-camphor , possessing the same constitution and the absolute configuration shown in formula H-3d . [ 1 ] : 525-526
The material for this identification of H-3d was synthesized from (+)-camphor as follows: cis -isoketopinic acid H-3e , obtained from (+)-camphor by an established route described in the literature, [ 70 ] was converted via the corresponding chloride , azide , and isocyanate to the methyl urethane H-3f . When treated with potassium t -butoxide in t -butanol and subsequently with KOH, H-3f was converted to H-3h , clearly by way of the intermediate H-3g . The identity of the two samples of H-3d and H-3h obtained by the two routes described established the absolute configuration of (−)-H-3 and thereby that of its enantiomer, the ring-A precursor (+)-H-3 . [ 1 ] : 525-526
Synthesis of the ring-D precursor from (-)-camphor
(−)-Camphor was nitrosated in the α-position of the carbonyl group to give oxime H-4 ; Beckmann cleavage afforded, via the corresponding nitrile, the amide H-5 . Hofmann degradation via an intermediate amine and its ring closure led to lactam H-6 . Its N -nitroso derivative H-7 was converted into the diazo compound H-8 . Thermal decomposition of H-8 induced methyl migration to give the cyclopentene H-9 . Reduction to H-10 ( LiAlH 4 ), oxidation ( chromic acid ) to the aldehyde H-11 , Wittig reaction ( carbomethoxymethylenetriphenylphosphorane ) to H-12 , and hydrolysis of the ester group finally gave the trans -carboxylic acid H-13 . [ 1 ] : 527-528 [ note 14 ]
Coupling of ring-A and ring-D precursors to "pentacyclenone"
N -Acylation of the tricyclic aminoketone (+)-H-3 with the chloride H-14 of carboxylic acid H-13 gave amide H-15 , which on treatment with potassium t -butoxide in t -butanol stereoselectively produced the pentacyclic keto-lactam H-16 via an intramolecular Michael reaction that directs the indicated hydrogen atoms into a trans relationship to each other. In anticipation of the Birch reduction of the aromatic ring, protective groups for the two carbonyl functions of H-16 were required: one for the ketone carbonyl group as the ketal H-17 , and the other for the lactam carbonyl as the highly sensitive enol ether H-20 . The latter protection was achieved by treatment of H-17 with Meerwein salt (triethyloxonium tetrafluoroborate) to give the iminium salt H-18 , followed by conversion to the orthoamide H-19 ( NaOMe /MeOH) and, finally, expulsion of one molecule of methanol by heating in toluene. Birch reduction of H-20 ( lithium in liquid ammonia , t -butanol, THF ) provided the tetraene H-21 . Treatment with acid under carefully controlled conditions led first to an intermediate dione with the double bond in the β,γ-position, which then moved into conjugation to give the dione H-22 , dubbed pentacyclenone . [ 1 ] : 528-531 [ 14 ] : 5
From "pentacyclenone" to "corrnorsterone"
The ethylene ketal protecting group in pentacyclenone H-22 was removed by acid-catalyzed hydrolysis to unveil the ketone group of H-23 . [ 1 ] : 531 The dioxime primarily formed by reaction of the diketone H-23 with hydroxylammonium chloride was regioselectively hydrolyzed ( nitrous acid /acetic acid) to the desired mono-oxime H-24 . This is the oxime of the sterically more hindered ketone group, whose nitrogen atom is destined to become the nitrogen of the target molecule's ring D. Crucial for this purpose is the configuration at the mono-oxime double bond, the hydroxyl group occupying the sterically less hindered position. [ 1 ] : 532 The C,C double bonds of both the cyclopentene and the cyclohexenone ring in H-24 were then cleaved by ozonolysis (ozone at −80 °C in MeOH, then periodic acid ), and the carboxylic acid group formed was esterified (CH 2 N 2 ) to give the diketone H-25 . An intramolecular aldol condensation of the 1,5-dicarbonyl unit in MeOH, using pyrrolidine acetate as the base, followed by tosylation of the oxime's hydroxyl group, afforded the cyclohexenone derivative H-26 . A second ozonolysis in wet methyl acetate , followed by treatment with periodic acid and CH 2 N 2 , gave H-27 . Beckmann rearrangement (MeOH, sodium polystyrene sulfonate, 2 hrs, 170 °C) regioselectively produced [ 1 ] : 532 the lactam H-27a (not isolated), which reacted further in an amine-carbonyl condensation → aldol condensation cascade to the tetracycle H-28 , [ 1 ] : 533-534 called α-corrnorsterone , a pun marking it as a "cornerstone" [ 1 ] : 534 in the synthesis of the desired A-D-component. [ 1 ] : 531-537
This compound required strongly alkaline conditions to open its lactam ring, but it was discovered that a minor isomer also isolated from the reaction mixture, β-corrnorsterone H-29 , undergoes this lactam ring opening under alkaline conditions with great ease. [ 1 ] : 536 Structurally, the two isomers differ only in the orientation of the propionic acid side chain at ring A: the β-isomer has the more stable trans -orientation of this chain relative to the neighboring acetic acid chain formed on opening of the lactam ring. Equilibration of α-corrnorsterone H-28 by heating in strong base, followed by acidification and treatment with diazomethane , led to the isolation of pure β-corrnorsterone H-29 in 90% yield. [ 1 ] : 537 The correct absolute configuration of the six contiguous asymmetric centers in β-corrnorsterone was confirmed by an x-ray crystal structure analysis of bromo-β-corrnorsterone [ 71 ] [ 1 ] : 529 with the "unnatural" configuration. [ 1 ] : 538 [ 14 ] : 8 [ 4 ] : 0:49:20-0:50:42
Synthesis of the A-D-component carrying the propionic acid function at ring D as methoxycarbonyl group (model A-D-component)
Treatment of β-corrnorsterone H-29 with methanolic HCl cleaved the lactam ring and produced an enol ether derivative named hesperimine [ note 15 ] H-30u . Ozonolysis to the aldehyde H-32u , reduction of the aldehyde group with NaBH 4 in MeOH to the primary alcohol H-33u and, finally, conversion of the hydroxy group via the corresponding mesylate gave the bromide H-34u . This constitutes the model A-D-component, the one with an undifferentiated propionic acid function at ring D (i.e., bearing a methyl ester group like all the other side chains). [ 1 ] : 539-540
Synthesis of the A-D-component carrying the propionic acid function at ring D as nitrile group
Conversion of β-corrnorsterone H-29 to the proper A-D-component H-34 , [ 1 ] : 538-539 containing the carboxyl function of the ring-D propionic acid side chain as a nitrile group differentiated from all the other methoxycarbonyl groups, involved the following steps: treatment of H-29 with a methanolic solution of thiophenol and HCl afforded the phenyl thioenol ether derivative H-30 , which upon ozonolysis at low temperature gave the corresponding thioester - aldehyde H-31 and, after treatment with liquid ammonia, the amide H-32 . Reduction of the aldehyde group with NaBH 4 to H-33 , mesylation of the primary hydroxy group with methanesulfonic anhydride under conditions that also convert the primary amide group into the desired nitrile group and, finally, replacement of the methanesulfonyloxy group by bromide produced the A-D-component H-34 with the propionic acid function at ring D as a nitrile, differentiated from all other such side chains. [ 1 ] : 539-540 [ 4 ] : 1:01:56-1:19:47
The construction of the corrin chromophore with its three vinylogous amidine units constitutes – besides the direct single bond connection between the rings A and D – the central challenge to any attempt to synthesize vitamin B 12 . The very first approach to a total synthesis of vitamin B 12 launched by Cornforth [ 45 ] : 261-268 was discontinued when confronted with the task of coupling synthesized ring precursors. [ 18 ] : 1493,1496 Coupling the Harvard A-D-components with the ETH B-C-component required extensive exploratory work, this in spite of the knowledge gained in the ETH model syntheses of less complex (i.e., less peripherally substituted) corrins. What might be called an epic engagement for formally making just two C,C bonds lasted from early 1967 [ 18 ] : 1557 until June 1970. [ 2 ]
Both at ETH and Harvard, extensive model studies on the coupling of simplified enaminoid analogues of the A-D-component with the (ring C) imino- and thio-iminoester derivatives of the full-fledged B-C-component had consistently shown that a coupling of the Harvard and the ETH components could hardly be achieved by the method that had been so successful in the synthesis of the simpler corrins, namely, by an intermolecular enamino-imino(or thio-imino)ester condensation. [ 7 ] [ 8 ] [ 18 ] : 1561 [ 62 ] : 41-58 [ 1 ] : 544 [ 4 ] : 1:25:02-1:26:26 The outcome of these model studies determined the final structure type of the Harvard A-D-component: a structure capable of acting as a component of a C/D-coupling by sulfide contraction via alkylative coupling , [ 8 ] : 384-386 [ 47 ] i.e., the bromide H-34u . [ 7 ] : 18-22 [ 62 ] : 47,51-52 This method had already been implemented by the ETH group in the synthesis of the B-C-component . [ 33 ] : 16-19 [ 37 ] : 1927-1941 [ 18 ] : 1537-1540
An extensive search for optimal conditions, first for a C/D-coupling of an A-D-component with the ETH B-C-component E-19 , then for the subsequent intramolecular A/B-corrin-ring closure, was pursued in both laboratories, using the f-undifferentiated model A-D-component [ note 7 ] H-34u [ 1 ] : 540 as a model. [ 2 ] : 287-300 [ 18 ] : 1561-1564 As a result of work by Yoshito Kishi at Harvard, [ 2 ] : 290 [ 18 ] : 1562 [ 14 ] : 11-12 and Peter Schneider at ETH, [ 48 ] : 12,22-29 [ 18 ] : 1563-1564 optimal conditions for the C/D-coupling were eventually found at Harvard, while the first and most reliable method for the corrin-ring closure between rings A and B was developed at ETH. [ 18 ] : 1562 The procedures for C/D-coupling and A/B-corrin-ring closure developed in this model series were later applied to the corresponding steps in the f-differentiated series as parts of the cobyric acid synthesis.
Synthesis of dicyano-cobalt(III)-5,15-bisnor-a,b,c,d,e,f,g-heptamethyl-cobyrinate from the ring-D undifferentiated model A-D-component
D/C coupling. [ 7 ] : 22-23 [ 2 ] : 287-292 [ 48 ] : 12,22-28 [ 18 ] : 1561-1562
The key problem in this step was the lability of the primary coupling product, thioether HE-35u , which isomerized to other thioethers that at first could not be subjected to sulfide contraction in a reproducible procedure with acceptable yields. [ 2 ] : 287-290 [ 4 ] : 1:26:59-1:32:00 Induced by potassium t -butoxide in THF/ t -butanol under rigorously controlled conditions with strict exclusion of air and moisture, the model A-D-component H-34u smoothly reacted with the B-C-component E-19 [ 48 ] : 53-58 to give the sulfur-bridged coupling product HE-35u , named "thioether type I", in essentially quantitative yield. [ 2 ] : 287-288 However, this product could be isolated only under very carefully controlled conditions, since it equilibrates with extreme ease (e.g., on chromatography, or with traces of trifluoroacetic acid in methylene chloride solution) to the more stable isomeric thioether HE-36u (thioether type II), which contains, in contrast to thioether type I, the π-system of a conjugatively stabilized vinylogous amidine. [ 2 ] : 289 Depending on conditions, still another isomer HE-37u (thioether type III) was observed. [ 2 ] : 290 Starting with such mixtures of coupling products, at ETH a variety of conditions (e.g., methyl-mercury complex, BF 3 , triphenylphosphine [ 48 ] : 58-65 [ 2 ] : 291 ) were found to induce (via HE-38u ) the contraction step to HE-39u in moderate yields. [ 18 ] : 1562 [ 2 ] : 287-292 With the choice of the solvent found to be crucial, [ 4 ] : 1:34:52-1:35:12 the optimal procedure at Harvard was heating thioether type II HE-36u in sulfolane in the presence of 5.3 equivalents of trifluoroacetic acid and 4.5 equivalents of tris-(β-cyanoethyl)-phosphine at 60 °C for 20 hours, producing HE-39u in up to 85% yield. [ 2 ] : 292 [ 48 ] : 65-72 Later it was discovered that nitromethane could also be used as solvent. [ 4 ] : 1:34:52-1:35:13 [ 48 ] : 28
A/B-ring closure. [ 2 ] : 293-300 [ 48 ] : 12,29-39 [ 18 ] : 1562-1564
The problem of corrin-ring closure between rings A and B was solved in two different ways, one developed at ETH, the other pursued at Harvard. [ 32 ] : 19 Both methods correspond to procedures developed before in the synthesis of metal complexes [ 72 ] as well as free ligands [ 73 ] of simpler corrins. [ 7 ] : 25-28 [ 8 ] : 387-389 [ 18 ] : 1563 In the explorations of ring-closure procedures for the much more highly substituted A/B-seco-corrinoid intermediate HE-39u , the ETH group focused on the intramolecular version of the oxidative sulfide contraction method, eventually leading to the dicyano-cobalt(III)-complex HE-48u . [ 48 ] : 29-39 [ 2 ] : 297-299 This first totally synthetic corrinoid intermediate was identified with a corresponding sample derived from vitamin B 12 . [ 18 ] : 1563
At Harvard, it was shown that the closure to the corrin macrocycle could also be realized by the method of thioiminoester/enamine condensation. [ 2 ] : 299-300 All reactions described here had to be executed on a very small scale, with "... the utmost rigour in the exclusion of oxygen from the reaction mixtures" [ 2 ] : 296 , and most of them also under strict exclusion of moisture and light, demanding very high standards of experimental expertise. [ 2 ] : 304
The major obstacle in achieving an A/B-corrin-ring closure was the exposure of the highly unstable ring B exocyclic methylidene double bond, which tends to isomerize into a more stable, unreactive endocyclic position with great ease. [ 48 ] : 86,97-98 [ 2 ] : 293-294 [ 3 ] : 161 [ 18 ] : 1562
The problem was solved at ETH [ 18 ] : 1562-1563 [ 48 ] : 29-39,126-135 by finding that treatment of the thiolactone-thiolactam intermediate HE-40u (obtained from HE-39u by reaction with P 2 S 5 [ 48 ] : 73-83 ) with dimethylamine in dry MeOH (room temperature, exclusion of air and light) smoothly opens the thiolactone ring at ring B, forming by elimination of H 2 S the exocyclic methylidene double bond as well as a dimethylamino-amide group in the acetic acid side chain. [ 48 ] : 32-34,96-99 These conditions are mild enough to prevent double bond tautomerization to the thermodynamically more stable isomeric position in the ring. Immediate conversion with a Zn-perchlorate-hexa(dimethylformamide) complex in methanol to zinc complex HE-41u , followed by oxidative coupling (0.05 mM solution of I 2 / KI in MeOH, 3 h) afforded HE-42u . [ 48 ] : 100-105 Sulfide contraction (triphenylphosphine, trifluoroacetic acid, 85 °C, exclusion of air and light) followed by re-complexation with Zn(ClO 4 ) 2 (KCl, MeOH, diisopropylamine ) led to the chloro-zinc complex HE-43u . [ 48 ] : 105-116 The free corrinium salt formed when HE-43u was treated with trifluoroacetic acid in acetonitrile was re-complexed with anhydrous CoCl 2 in THF to the dicyano-cobalt(III)-complex HE-44u . [ 48 ] : 117-125 [ 2 ] : 295 Conversion of the dimethylamino-amide group in the acetic acid side chain of ring B into the corresponding methylester group ( O -methylation by trimethyloxonium tetrafluoroborate , followed by decomposition of the iminium salt with aqueous NaHCO 3 ) afforded totally synthetic 5,15-bisnor-heptamethyl cobyrinate HE-48u . [ 48 ] : 11,117-125 A crystalline sample of HE-48u was identified via UV/VIS , IR , and ORD spectra with a corresponding crystalline sample derived from vitamin B 12 . [ 48 ] : 42,135-141 [ 55 ] : 14,64-71,78-90 [ 2 ] : 287,301-303 [ 3 ] : 146-150 [ 74 ]
Later at Harvard, [ 2 ] : 299-300 the A/B-corrin-ring closure was also achieved by converting the thiolactone-thiolactam intermediate HE-40u to the thiolactone-thioiminoester HE-45u by S -methylation of the thiolactam sulfur (MeHgOi-Pr, then trimethyloxonium tetrafluoroborate). The product HE-45u was subjected to treatment with dimethylamine (as in the ETH variant), forming the highly labile methylidene derivative HE-46u , which then was converted with anhydrous CoCl 2 in THF to dicyano-cobalt(III) complex HE-47u , the substrate ready to undergo the (A⇒B)-ring closure by a thioiminoester/enamine condensation. A careful search at Harvard for reaction conditions led to a procedure (KO- t -Bu, 120 °C, two weeks) that gave corrin Co complex HE-44u , identical with, and obtained in overall yields comparable to, HE-44u prepared by the ETH variant of the sulfide contraction procedure. [ 2 ] : 300 Since in corrin model syntheses such a C,C-condensation required induction by a strong base, its application to a substrate containing seven methylester groups was not without problems; [ 18 ] : 1562 in a later procedure, milder reaction conditions were applied. [ 3 ] : 162
Synthesis of dicyano-cobalt(III)-5,15-bisnor-a,b,d,e,g-pentamethyl-cobyrinate-c- N,N -dimethylamide-f-nitrile (the common corrinoid intermediate) from the ring-D-differentiated A-D-component
The A-D-component H-34 [ note 8 ] with its propionic acid function at ring D differentiated from all the other carboxyl functions as a nitrile group had become available at Harvard in spring 1971. [ 51 ] : 23 As a result of the comprehensive exploratory work that had been done with the model A-D-component at Harvard and ETH, [ 2 ] : 288-292 [ 48 ] : 22-28 [ 18 ] : 1561-1562 joining the proper A-D-component H-34 with the B-C-component E-19 was accomplished in three operations: H-34 + E-19 →→ HE-36 → HE-39 . [ 3 ] : 158-159 [ 4 ] : 1:19:48-1:36:15
Closing the corrin ring was achieved in the sequence HE-39 (P 2 S 5 , xylene , γ-picoline )→ HE-40 [ 4 ] : 1:36:45-1:37:49 → HE-41 [ 4 ] : 1:37:51-1:42:33 → HE-42 [ 4 ] : 1:42:35-1:44:34 → HE-43 (overall yield "about 60 %" [ 4 ] : 1:44:35-1:46:32 ), and finally to cobalt complex HE-44 . [ 4 ] : 1:46:34-1:52:51 [ 3 ] : 160-166 Reactions in this sequence were based on the procedures developed in the undifferentiated model series . [ 2 ] : 293-300 [ 48 ] : 29-39 [ 18 ] : 1562-1564 Two methods were available for the A/B-ring closure: oxidative sulfide contraction within a zinc complex, followed by exchange of zinc for cobalt (ETH [ 3 ] : 162-165 ), or the Harvard alkylative variant of a sulfide contraction, [ 3 ] : 160-162 a thioiminoester / enamine condensation of the cobalt complex (improved reaction conditions: diazabicyclononanone in DMF, 60 °C, several hours [ 3 ] : 162 ). Woodward preferred the former: [ 3 ] : 165 "...the oxidative method is somewhat superior, in that it is relatively easier to reproduce, .... " [ 4 ] : 1:52:37-1:53:06
The corrin complex dicyano-cobalt(III)-5,15-bisnor-pentamethyl-cobyrinate-c- N,N -dimethylamide-f-nitrile HE-44 took up the role of the common corrinoid intermediate in the two approaches to cobyric acid synthesis: HE-44 ≡ E-37 . Due to the high configurational lability of C-H chirogenic centers C-3, C-8 and C-13 [ 4 ] : 1:21:49-1:23:42,1:35:43-1:36:14,1:51:51-1:52:30 at the ligand periphery in basic or acidic milieu, separation by HPLC was indispensable for isolation, purification and characterization of pure diastereomers of this and the following corrinoid intermediates. [ 3 ] : 165-166 [ 9 ] : 88-89 [ 4 ] : 1:53:07-2:01:24
Starting material for the synthesis of a ring-C precursor was (+)- camphorquinone H-35 [ note 16 ] which was converted to the acetoxy-trimethylcyclohexene-carboxylic acid H-36 by BF 3 in acetic anhydride , a reaction pioneered by Manasse & Samuel in 1902 [ 75 ] and already successfully applied in a previous synthesis of the ring-C precursor by Pelter and Cornforth. [ note 6 ] Conversion of H-36 to amide H-37 was followed by its ozonolysis to peroxide H-38 , which was reduced to the keto- succinimide H-39 by zinc and MeOH. Treatment with methanolic HCl gave lactam H-40 , followed by thermal elimination of methanol to give the ring-C precursor H-41 . [ 1 ] : 540-542 [ 48 ] : 49-50 [ 14 ] : 4-5,15 This was found to be identical with the ring-C precursor E-13 prepared by a different route [ note 5 ] at ETH. [ 61 ] : 32 [ 44 ] : 30,33-34,81
In the A/D approach to the synthesis of cobyric acid, the four ring precursors (ring-C precursor only formally so [ 12 ] : ref. 22 ) derive from the two enantiomers of one common chiral starting material. All three vinylogous amidine bridges that connect the four peripheral rings were constructed by the sulfide contraction method , with the B-C-component – already prepared for the A/B-approach – serving as an intermediate. [ 12 ] [ 11 ] The photochemical A/D-secocorrin→corrin cycloisomerization, by which the corrin ring was closed between rings A and D, is a novel process, targeted and found to exist in a model study ( cf. fig. 2 ). [ 36 ] [ 37 ] : 1943-1948
Syntheses of the ring-B precursor
Two syntheses of ring-B precursor (+)-E-5 were realized; the one starting from 2-butanone was used further. [ 6 ] : 188 Two pathways for the conversion of the ring-B precursor into the ring-C precursor (+)-E-5 → (−)-E-13 ≡ H-41 were developed, one at ETH , [ 44 ] : 15-39 [ 1 ] : 544 , and one at Harvard. [ 6 ] : 193 [ note 17 ] These conversions turned out to be inadequate for producing large amounts of ring-C-precursor. [ 46 ] : 38 [ 18 ] : 1561 However, the pathway developed at ETH served the purpose of determining the absolute configuration of the ring-B precursor. [ 6 ] : 193 [ 61 ] : 32 Bulk amounts of ring-C precursor to be used for the production of the B-C-component at ETH [ 44 ] : 40 [ 6 ] : 193 [ 33 ] : 631 were prepared at Harvard from (+)-camphor by a route originally developed by Pelter and Cornforth . [ note 6 ]
Ring-B precursor from 2-butanone and glyoxylic acid. Aldol condensation between 2-butanone and glyoxylic acid by treatment with concentrated phosphoric acid gave stereoselectively ( trans )-3-methyl-4-oxo-2-pentenoic acid E-1 . [ 39 ] : 11-20,45-45 Diels-Alder reaction of E-1 with butadiene in benzene in the presence of SnCl 4 afforded the racemate of the chiral Diels-Alder adduct E-2 , which was resolved into the enantiomers by sequential salt formation with both (−)- and (+)- 1-phenylethylamine . [ 43 ] : 22,59-62 The chirogenic centers of the (+)- enantiomer (+)-E-2 possessed the absolute configuration of ring B in vitamin B 12 . [ 60 ] : 35 [ 6 ] : 191 Oxidation of this (+)-enantiomer with chromic acid in acetone in the presence of sulfuric acid afforded the dilactone (+)-E-3 of the intermediary tricarboxylic acid E-3a . [ 43 ] : 35,72-73 Thermodynamic control of dilactone formation leads to the cis -configuration of the ring junction. [ 43 ] : 32-34 Elongation of the acetic acid side chain of (+)-E-3 by the Arndt-Eistert reaction (via the corresponding acid chloride and diazoketone) gave dilactone (+)-E-4 . [ 61 ] : 15-16,65-67 Treatment of (+)-E-4 with NH 3 in MeOH at room temperature formed a 2:1 mixture of two isomeric lactam - lactones , with ring-B precursor (+)-E-5 predominating (isolated in 55% yield). [ 46 ] : 12-17,57-63 [ 6 ] : 186-188 [ 12 ] [ 1 ] : 542-543 The isomeric lactam-lactone could be isomerized to (+)-E-5 by treatment with methanolic HCl. [ 61 ] : 24-26,81-84
Alternative synthesis of racemic ring-B precursor from Hagemann's ester: implementation of the amidacetal-Claisen rearrangement. Five steps were needed to transform Hagemann's ester rac - E-6 into rac - E-5 , the racemate of the lactam-lactone form of the ring-B precursor. [ 60 ] : 14-31 [ 6 ] : 188-190 The product of the C-methylation step rac - E-6 → rac - E-7 ( NaH , CH 3 I ) was purified via its crystalline oxime . The cis -hydroxy-ester (configuration secured by lactone formation [ 60 ] : 64 ) resulting from the reduction step rac - E-7 → rac - E-8 ( NaBH 4 ) had to be separated from the trans isomer. The thermal rearrangement rac - E-8 → rac - E-9 constitutes the first implementation of the amidacetal-Claisen rearrangement in organic synthesis, [ 76 ] [ 60 ] : 36-49 a precedent to Johnson's orthoester-Claisen and Ireland's ester-enolate rearrangement . [ 77 ] Ozonolysis ( O 3 /MeOH, HCOOH / H 2 O 2 ) of the N,N -dimethylamide ester rac - E-9 afforded dilactone acid rac - E-10 , from which two reactions led to the lactam-lactone rac - E-5 , the racemate of ring-B precursor (+)-E-5 . [ 60 ] : 57-67
Determination of absolute configuration of (+)-ring-B precursor via its conversion into the (+)-ring-C precursor
The conversion of ring-B precursor into the ring-C precursor was based on a reductive decarbonylation of thiolactone E-12 with chloro-tris(triphenylphosphine)-rhodium(I). [ 44 ] : 14-32 [ 6 ] : 191-193 [ 12 ] Treatment of a methanolic solution of ring-B precursor (+)-E-5 with diazomethane in the presence of catalytic amounts of sodium methoxide , followed by thermal elimination of methanol, gave methylidene lactam E-11 , which was converted to the thiolactone E-12 with liquid H 2 S containing a catalytic amount of trifluoroacetic acid . [ 44 ] : 15-16,56-58 Heating E-12 in toluene with the Rh(I)-complex afforded ring-C precursor (−)-E-13 besides the corresponding cyclopropane derivative E-14 . Ring-C precursors prepared via this route and from (+)-camphor at Harvard [ 1 ] : 540-542 were found to be identical: (−)-E-13 ≡ H-41 . [ 44 ] : 33-34
Ozonolysis of ring-C precursor (−)-E-13 gave succinimide derivative (−)-E-15 . [ 44 ] : 33-35,88-89 This succinimide was found to be identical [ 6 ] : 193 [ 1 ] : 543-544 in constitution and optical rotation (i.e., configuration) with the corresponding succinimide derived from ring C of vitamin B 12 , isolated after ozonolysis of crystalline heptamethyl-cobyrinate (cobester [ note 9 ] ) prepared from vitamin B 12 . [ 56 ] : 9-18,67-70
The approach pursued at Harvard for conversion of ring-B precursor into ring-C precursor was based on a photochemical degradation of the acetic acid side chain carboxyl group, starting from (+)-E-7 prepared at ETH. [ note 17 ]
Coupling of ring-B and ring-C precursors to the B-C-component. Implementation of the sulfide contraction C,C-condensation method
The iminoester /enamine C,C- condensation method for constructing the vinylogous amidine system, developed in the model studies on corrin synthesis, [ 28 ] [ 35 ] failed completely in attempts to create the targeted C,C-bond between ring-B precursor (+)-E-5 and ring-C precursor (−)-E-13 to give the B-C-component E-18 . [ 6 ] : 193-194 [ 8 ] : 379 [ 1 ] : 544 The problem was solved by "intramolecularization" of the bond formation process between the electrophilic (thio)iminoester carbon and the nucleophilic methylidene carbon of the enamine system: these two centers were first oxidatively connected by a sulfur bridge, and the C,C-bond was then formed by a now intramolecular thio -iminoester/enamine condensation with concomitant transfer of the sulfur to a thiophile. [ 6 ] : 194-197 [ 8 ] : 380-386 [ 18 ] : 1537-1538
Conversion of lactam (+)-E-5 into the corresponding thiolactam E-16 (P 2 S 5 ), [ 46 ] : 20-23,74-75 oxidation of E-16 with benzoyl peroxide in the presence of ring-C precursor (−)-E-13 (prepared at Harvard by the Cornforth route [ note 6 ] ), followed by heating the reaction product E-17 in triethylphosphite (as both solvent and thiophile) afforded B-C-component E-18 as an unseparated mixture of two epimers (regarding the configuration of the propionic acid side chain at ring B) in up to 80 % yield. [ 46 ] : 38-43,96-102 [ 33 ] : 16-19 [ 8 ] : 381-383 [ 48 ] : 20-21,50-52
The bracketed formulas in the reaction scheme illustrate the type of mechanism operating in the process: E-16a = the primary coupling product of E-16 and E-13 on the way to E-17 ; E-17a = extrusion of the sulfur atom (captured by the thiophile) on the way to E-18 , where it is left open whether this latter process occurs at the stage of the episulfide. The reaction concept developed at this stage, dubbed sulfide contraction , [ 6 ] : 199 [ 47 ] [ 18 ] : 1534-1541 [ 37 ] : 1927-1941 turned out to make possible the construction of all three meso-carbon bridges of the vitamin's corrin ligand in both approaches of the synthesis. [ 12 ] [ 11 ] [ 2 ] : 288-292,297-300 [ 3 ] : 158-164
The conversion of the bicyclic lactone-lactam E-18 into the corresponding thiolactone-thiolactam E-20 was brought about by heating with P 2 S 5 / 4-methylpyridine in xylene at 130 °C; milder conditions produced thiolactam-lactone E-19 , used for coupling with the Harvard A-D-components . [ 51 ] : 73-83
Synthesis of ring-D precursor for the A/D approach
The starting material for the ring-D precursor, [ 61 ] : 40-61 [ 63 ] : 17-22 [ 12 ] the (−)- enantiomer of the dilactone-carboxylic acid (−)-E-3 , was prepared from the (−)-enantiomer of the Diels-Alder adduct (−)-E-2 [ note 18 ] by oxidation with chromic acid /sulfuric acid in acetone. [ 43 ] : 35,72-73 Treatment of (−)-E-3 with NH 3 in MeOH gave a lactone-lactam-acid which was esterified with diazomethane to the ester E-21 , [ 61 ] : 104-110 the lactone ring of which was opened with KCN in MeOH to give E-22 . [ 61 ] : 114-116 Conventional conditions of an Arndt-Eistert reaction ( SOCl 2 : acid chloride, then CH 2 N 2 in THF: diazoketone, treated with Ag 2 O in MeOH) led to an unforeseen, yet useful, ring closure of the originally formed chain-elongated ester through participation of the cyano group as a neighboring electrophile , affording the bicyclic enamino-ester derivative E-23 . [ 61 ] : 116-120 Hydrolysis with aqueous HCl, accompanied by decarboxylation, and re-esterification with diazomethane gave keto-lactam-ester E-24 . [ 61 ] : 123-126 [ 63 ] : 40-41 Ketalization ( (CH 2 OH) 2 , CH(OCH 3 ) 3 , TsOH ) of E-24 and conversion of this lactam-ester to thiolactam E-25 ( P 2 S 5 ) was followed by reductive removal of the sulfur with Raney nickel , acetylation of the amino group, and hydrolysis of the ketal (AcOH) to afford E-26 . [ 63 ] : 42-59 This was converted by deacetylation of the amino group with HCl, and then by treatment with NH 2 OH/HCl , MeOH/ NaOAc into oxime E-27 . Beckmann fragmentation (HCl, SOCl 2 in CHCl 3 , N-polystyryl-piperidine) of this oxime E-27 produced imino-nitrile E-28 , [ 63 ] : 60-67 which, when treated with bromine (in MeOH, phosphate buffer pH 7.5, −10 °C) gave ring-D precursor E-29 . [ 51 ] : 84-88
Conversion of the ring-B precursor into the ring-A precursor for the A/D approach
The ring-A precursor (−)-E-31 required in the A/D approach is a close derivative of ring-B precursor (+)-E-5 . Its preparation from (+)-E-5 required opening of the lactone group ( KCN in MeOH), followed by re-esterification with diazomethane to E-30 , then conversion of the lactam group into a thiolactam group with P 2 S 5 to yield (−)-E-31 . [ 51 ] : 63-72 [ 12 ]
Coupling of the B-C-component with ring-D and ring-A precursors
The most efficient way of attaching the two rings D and A to the B-C-component E-18 was to convert E-18 directly into its thiolactam- thiolactone derivative E-20 and then to proceed by first coupling ring-D precursor E-29 to ring C, and then ring-A precursor E-31 to ring B, both by the sulfide contraction method. [ 51 ] : 26-31 [ 9 ] : 80-83 [ 12 ] The search for the reaction conditions for these attachments was greatly facilitated by exploratory work done on the two sulfide contraction steps in the A/B approach model study . [ 51 ] : 27 [ 48 ] : 22-39 [ 2 ] : 285-300
Attachment of ring-D precursor E-29 to the ring-C thiolactam in E-20 by sulfide contraction via alkylative coupling ( t -BuOK in t -BuOH/THF, tris-(β-cyanoethyl)-phosphine/ CF 3 COOH in sulfolane ) afforded the B/C/D-sesqui-corrinoid E-32 . [ 51 ] : 89-97 To attach ring-A precursor E-31 , the ring B of E-32 was induced to expose its exocyclic methylidene double bond by treatment with dimethylamine in MeOH (using the method [ note 19 ] developed by Schneider [ 48 ] : 32-34 ), forming E-33 , [ 51 ] : 108-115 which was subjected to the following cascade of operations: [ 51 ] : 130-150 iodination ( N -iodosuccinimide , CH 2 Cl 2 , 0 °C), coupling with the thiolactam sulfur of the ring-A precursor E-31 ([(CH 3 ) 3 Si] 2 N-Na in benzene/ t -BuOH), complexation (Cd(ClO 4 ) 2 in MeOH), treatment with triphenylphosphine /CF 3 COOH in boiling benzene (sulfide contraction) and, finally, re-complexation (Cd(ClO 4 ) 2 / N,N -diisopropylethylamine in benzene/MeOH). These six operations, all carried out without isolation of intermediates , gave A/D-seco-corrin complex E-34 as a mixture of peripheral epimers (separable via HPLC [ 51 ] : 143-147 ) in 42-46 % overall yield. [ 51 ] : 139
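As a back-of-the-envelope check (an illustration, not a figure from the source), an overall yield of 42-46 % across six consecutive operations corresponds to an average yield per operation of roughly

```latex
\bar{y} = y_{\mathrm{overall}}^{1/6} \approx 0.44^{1/6} \approx 0.87,
```

i.e., about 87 % per operation, which illustrates the level of efficiency each individual step had to reach for the telescoped cascade to remain practical.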
A/D-corrin-ring closure by the photochemical A/D-seco-corrin→corrin cycloisomerization to dicyano-cobalt(III)-5,15-bisnor-a,b,d,e,g-pentamethyl-cobyrinate-c- N,N -dimethylamide-f-nitrile (the common corrinoid intermediate)
The conditions and prerequisites for the final (A⇒D)- corrin -ring closure were taken over from extensive corrin model studies. [ 36 ] [ 78 ] [ 9 ] : 71-74,83-84 [ 18 ] : 1565-1566 [ 37 ] : 1942-1962 Problems specific to the cobyric acid synthesis that had to be tackled were: [ 9 ] : 84-88 the possible formation of two diastereomeric A/D- trans -junctions in the ring closure, [ 51 ] : 37-38 exposure of the methylidene double bond at ring A of the A/D-seco-corrin E-34 in a labile Cd complex, [ 51 ] : 35-36 [ 18 ] : 1566 and epimerizability of the peripheral stereogenic centers C-3, C-8 and C-13 before and after ring closure. [ 51 ] : 39 [ 3 ] : 148-150
In the application of this novel process in the A/D approach of the cobyric acid synthesis, [ 9 ] : 86-95 [ 51 ] : 39-53 [ 12 ] : 1419 the reaction proceeded most efficiently and with the highest coil stereoselectivity in favor of the natural A/D- trans junction in an A/D-seco-corrin cadmium complex. [ 51 ] : 42-45 [ 3 ] : 166 Treatment of Cd-complex E-34 as a mixture of peripheral epimers with 1,8-diazabicyclo[5.4.0]undec-7-ene in sulfolane at 60 °C under strict protection against light to eliminate the cyano group at ring A, directly followed by re-treatment with Cd(ClO 4 ) 2 , led to the labile [ 51 ] : 172 A/D-seco-corrin complex E-35 as a mixture of peripheral epimers. This was directly subjected to the key step, the photochemical ring closure reaction under rigorous exclusion of air: [ 51 ] : 40 visible light, under argon , MeOH, AcOH, 60 °C. The product of the A/D-ring closure was the free corrin ligand E-36 , as the originally formed Cd-corrinate – in contrast to the Cd- seco -corrinate E-35 – decomplexes in the reaction medium. [ 51 ] : 173 [ 12 ] : 1419 Corrin E-36 was immediately complexed ( CoCl 2 , [ 18 ] : 1499-1500,1563-64 KCN , air, H 2 O, CH 2 Cl 2 ) and finally isolated (thick-layer chromatography ) as a mixture of peripheral epimers in 45-50 % yield over four operations: [ 51 ] : 169-179 the common corrinoid intermediate dicyano-cobalt(III)-complex E-37 ≡ HE-44 . [ note 20 ]
HPLC analysis of this mixture E-37 showed the presence of six epimers with natural ligand helicity (Σ 95%, CD spectra ), among them 26% of natural diastereomer 3α,8α,13α, and an equal amount of its C-13 neo -epimer 3α,8α,13β. [ 51 ] : 46,179-186 [ 12 ] : 1414 Two HPLC fractions (Σ 5%) contained diastereomers with unnatural ligand helicity, as shown by inverse CD spectra. [ 51 ] : 42-43 Product mixtures from several such cycloisomerizations were combined for preparative HPLC separation and full characterization of the 14 isolated diastereomers of E-37 [ 51 ] : 207-251 (of 16 theoretically possible, regarding helicity and the epimeric centers C-3, C-8, C-13 [ 51 ] : 39 ).
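The count of 16 theoretically possible diastereomers follows from simple combinatorics (a clarifying note, not from the source): two possible ligand helicities combined with two configurations at each of the three epimerizable centers give

```latex
N = \underbrace{2}_{\text{helicity}} \times \underbrace{2^{3}}_{\text{C-3, C-8, C-13}} = 16.
```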
In an analytical run, the mixture of cadmium-seco-complex epimers E-35 was separated by HPLC (in the dark) into the natural chloro-cadmium-3α,8α,13α-A/D-seco-corrinate diastereomer (ααα)-E-35 and four other epimer fractions. [ 51 ] : 281-293 Upon irradiation [ 51 ] : 53 [ 12 ] and subsequent cobaltation, (ααα)-E-35 produced E-37 in yields of 70-80% as an essentially binary mixture consisting mainly of the 3α,8α,13α epimer together with some 3α,8α,13β epimer. Less than 1% of fractions with unnatural coil were formed (HPLC, UV/VIS , CD ). [ 51 ] : 293-300
Mechanistically , the photochemical A/D-seco-corrin→corrin cycloisomerization involves an antarafacial sigmatropic shift of the α-hydrogen from the CH 2 position C-19 at ring D to the CH 2 position of the methylidene group at ring A within a triplet excited state , creating a transient 15-center-16-electron π-system (see E-35a in fig. 27 ) that antarafacially collapses between positions C-1 and C-19 to the corrin system. [ 36 ] [ 37 ] : 1946,1967-1993 [ 79 ] The coil selectivity of the ring closure in favor of the corrin ligand's natural helicity is interpreted as relating to the difference in steric hindrance between the g-methoxycarbonyl acetic acid chain at ring D and the methylidene region of ring A in the two possible helical coil configurations of the A/D-seco-corrin complex (fig. 28). [ 51 ] : 38 [ 37 ] : 1960-1962
The final steps from the common corrinoid intermediate E-37/HE-44 to cobyric acid E-44/HE-51 were carried out by the two groups collaboratively and in parallel, the ETH group working with material produced by the A/D approach , and the Harvard group with that from the A/B approach . [ 63 ] : 15 [ 55 ] : 22 [ 57 ] : 47 [ 14 ] : 12 [ 18 ] : 1570-1571 What the two groups in fact accomplished thus were the common final steps of two different syntheses. [ 11 ] [ 12 ]
The tasks in this end phase of the project were the regioselective introduction of methyl groups at the two meso positions C-5 and C-15 of E-37/HE-44 , followed by conversion of all its peripheral carboxyl functions into primary amide groups, excepting that in side chain f at ring D, which had to end up as free carboxyl. These conceptually simple finishing steps turned out to be rather complex in execution, including unforeseen pitfalls like a dramatic loss of precious synthetic material in the so-called "Black Friday" (July 9, 1971). [ 55 ] : 39-40,107-118 [ 9 ] : 97-99 [ 3 ] : 168-169 [ 5 ] : 0:07:54-0:09:33 [ 18 ] : 1568-1569
The introduction of these methyl groups could draw on exploratory studies on model corrins [ 7 ] : 13-14 [ 8 ] : 375-377 [ 80 ] [ 18 ] : 1528,1530-1532 as well as on exploratory experiments carried out at ETH on cobester [ note 9 ] and its (c→C-8)-lactone derivative. [ 55 ] : 27-43 Chloromethyl benzyl ether alkylated the meso position C-10 of cobester, but not that of the corresponding lactone , the difference in behavior reflecting the difference in steric hindrance exerted on the meso position C-10 by its neighboring substituents. [ 55 ] : 37-39 This finding was decisive for the choice of the substrate to be used for introducing methyl groups at the meso positions C-5 and C-15 of E-37/HE-44 . [ 9 ] : 96-99 [ 55 ] : 19 [ 3 ] : 167 [ 18 ] : 1567-1568 In this final phase of the synthesis, HPLC again turned out to be absolutely indispensable for separation, isolation, characterization and, above all, identification of pure isomers of dicyano-cobalt(III)-complexes of totally as well as partially synthetic origin. [ 9 ] : 96-102 [ 3 ] : 165 [ 55 ] : 61-63 [ 5 ] : 0:21:13-0:25:28 [ 18 ] : 1566-1567
The first step was to convert the c- N,N -dimethylcarboxamide group of E-37/HE-44 into the (c→C-8)-lactone derivative E-38/HE-45 by treatment with iodine /AcOH, effecting iodination at C-8, followed by intramolecular O -alkylation of the carboxamide group to an iminium salt that hydrolyzes to the lactone. [ 63 ] : 23,90-108 [ 3 ] : 166-167 [ 4 ] : 2:02:18-2:09:02 This lactonization leads to cis -fused rings. [ 55 ] : 19 [ 5 ] : 0:09:34-0:10:43 Reaction of (c→C-8)-lactone E-38/HE-45 with chloromethyl benzyl ether in acetonitrile in the presence of LiCl gave, besides the mono-adduct, the bis-benzyloxy adduct E-39/HE-46 . When treated with thiophenol , this produced the bis-phenylthio-derivative E-40/HE-47 . Treatment with Raney nickel in MeOH not only set free the two methyl groups at the meso positions, but also reductively opened the lactone ring to the free c-carboxyl group at ring B, producing the correct α- configuration at C-8. Esterification of the c-carboxyl group with diazomethane afforded hexamethylester-f-nitrile E-41/HE-48 . [ 55 ] : 19-21,39-43,146-205 [ 3 ] : 167-169 For steric reasons, only the predominant [ 55 ] : 19 [ 63 ] : 24 [ 4 ] : 2:08:20-2:09:02 C-3 α-epimer (with the C-3 side chain below the plane of the corrin ring) reacted to the 5,15- disubstituted product E-39/HE-46 , the reaction thus amounting to a chemical separation of the C-3 epimers. [ 55 ] : 40 [ 5 ] : 0:12:51-0:14:33,0:15:56-0:16:24
In improved procedures developed at Harvard later in 1972, [ 18 ] : 1569 footnote 62 the reagent chloromethyl benzyl ether was replaced by formaldehyde /sulfolane/HCl in acetonitrile for the alkylation step, and Raney nickel in the reduction step was replaced by zinc/acetic acid to give E-41/HE-48 . [ 5 ] : 0:00:32-0:21:12
Concentrated H 2 SO 4 at room temperature converted the nitrile function of pure (3α,8α,13α)-E-41/HE-48 into the primary f- amide group of E-42/HE-49 , accompanied by partial epimerization at C-13; [ 9 ] : 100-103 [ 55 ] : 21,134-136 [ 3 ] : 150-151,169-170 an alternative procedure for the selective f-nitrile→f-amide conversion ( BF 3 in CH 3 COOH) later developed at Harvard proceeded without epimerization at C-13. [ 18 ] : 1569 footnote 62 [ 5 ] : 0:46:40-0:49:45 [ 55 ] : 21 A crystalline sample of the 3α,8α,13α-epimer of dicyano-cobalt(III)-a,b,c,d,e,g-hexamethyl-cobyrinate-f-amide E-42/HE-49 , isolated by HPLC, was the first totally synthetic intermediate to be chromatographically and spectroscopically identified with a relay sample made from vitamin B 12 . [ 55 ] : 136-141 [ 3 ] : 170
In the remaining steps of the synthesis, only epimerization at C-13 played an important role, [ 55 ] : 19-21 with 13α being the configuration of the natural corrinoids, and 13β known as neo -epimers of vitamin B 12 and its derivatives; [ 3 ] : 169-170 [ 81 ] these are readily separable by HPLC. [ 5 ] : 0:19:30-0:20:21 [ 55 ] : 135,208-209
In the course of 1972, comprehensive identifications (HPLC, UV/VIS , IR , NMR , CD , mass spectra ) of crystalline samples of totally synthetic intermediates with the corresponding compounds derived from vitamin B 12 were carried out in both laboratories: individually compared and identified were the 3α,8α,13α and 3α,8α,13β neo -epimer of f-amide E-42/HE-49 , as well as the corresponding pair of C-13-epimeric nitriles E-41/HE-48 . [ 55 ] : 206-221 [ 57 ] : 46-47 [ 5 ] : 0:27:28-0:46:32 All these dicyano-cobalt(III)-complexes are soluble in organic solvents [ 56 ] : 11 in which the separation power of HPLC by far exceeds that of analytical methods operating in water, [ 55 ] : 44-45 the solvent in which cobyric acid was to be identified, and where it exists as two easily equilibrating aquo-cyano complexes, epimeric regarding the position of the two non-identical axial Co ligands . [ 63 ] : 196-197 [ 57 ] : 49-60
These thorough identifications of the totally synthetic with partially synthetic materials mark the accomplishment of the two syntheses. They also reciprocally provided structure proof for a specific constitutional isomer isolated from a mixture of isomeric mono-amides formed in the partial ammonolysis of the B 12 -derived cobester, [ note 9 ] tentatively assigned to be the 3α,8α,13α-f-amide E-42/HE-49 (see fig. 30). [ 56 ] : 9-18,67-70 [ 55 ] : 226-239 [ 59 ]
The final task of reaching cobyric acid from f-amide E-42/HE-49 required the critical step of hydrolyzing the singular amide function into a free carboxyl function without touching any of the six methoxycarbonyl groups around the molecule's periphery. Since exploratory attempts by the conventional method of amide hydrolysis via nitrosation led to detrimental side reactions at the chromophore , a novel way of " hydrolyzing " the f-amide group without touching the six methylester groups was conceived and explored at ETH: treatment of f-amide E-42/HE-49 (B 12 -derived relay material) with the unusual reagent α-chloro-propyl-(N-cyclohexyl)- nitrone [ 82 ] and AgBF 4 in CH 2 Cl 2 , then with HCl in H 2 O/ dioxane , and finally with dimethylamine in isopropanol afforded the f-acid E-43/HE-50 in 57% yield. [ 63 ] : 24-25,159-172 [ 3 ] : 170-172 [ 5 ] : 0:53:17-0:58:30 Sustained experimentation at Harvard eventually showed the nitrosation method to be successful ( N 2 O 4 , CCl 4 , NaOAc ) and to produce the f-carboxyl group even more effectively. [ 3 ] : 172-173 [ 5 ] : 0:58:19-0:59:15
It was also at Harvard that conditions for the last step were explored, conversion of all remaining ester groups into primary amide groups by ammonolysis . Liquid ammonia in ethylene glycol , in the presence of NH 4 Cl and the absence of oxygen, converted f-carboxy-hexamethylester E-43/HE-50 into f-carboxy-hexa-amide E-44/HE-51 (= cobyric acid). [ 3 ] : 173-175 [ 55 ] : 24 This was crystallized and shown both as the α-cyano-β-aquo and the α-aquo-β-cyano form to be chromatographically and spectroscopically identical with the corresponding forms of natural cobyric acid. [ 5 ] : 0:59:53-1:09:58 [ 3 ] : 175-176 [ 63 ] : 26-27,196-221 At Harvard, the transformation E-43/HE-50 → E-44/HE-51 was eventually carried out starting with f-amide that had been obtained by total synthesis via the A/B approach. [ 57 ] : 47-61 The ETH group contented itself with a corresponding f-amide → cobyric acid conversion and subsequent cobyric acid identification where the actual starting material f-amide was derived from vitamin B 12 . [ 55 ] : 22 [ 63 ] : 15 [ 12 ] : footnote 45 [ 18 ] : 1570-1571
Vitamin D is a group of structurally related, fat-soluble compounds responsible for increasing intestinal absorption of calcium , magnesium , and phosphate , along with numerous other biological functions. [ 1 ] [ 2 ] In humans, the most important compounds within this group are vitamin D 3 ( cholecalciferol ) and vitamin D 2 ( ergocalciferol ). [ 2 ] [ 3 ]
Unlike the other twelve vitamins, vitamin D is only conditionally essential: with adequate skin exposure to the ultraviolet B (UVB) radiation component of sunlight, cholecalciferol is synthesized in the lower layers of the skin's epidermis . For most people, skin synthesis contributes more than dietary sources. [ 4 ] Vitamin D can also be obtained through diet, food fortification and dietary supplements . [ 2 ] In the U.S., cow's milk and plant-based milk substitutes are fortified with vitamin D 3 , as are many breakfast cereals. Government dietary recommendations typically assume that all of a person's vitamin D is taken by mouth, given the potential for insufficient sunlight exposure due to urban living, cultural choices about the amount of clothing worn outdoors, and use of sunscreen because of concerns about safe levels of sunlight exposure , including the risk of skin cancer . [ 2 ] [ 5 ] : 362–394
Cholecalciferol is converted in the liver to calcifediol (also known as calcidiol or 25-hydroxycholecalciferol), while ergocalciferol is converted to ercalcidiol (25-hydroxyergocalciferol). These two vitamin D metabolites, collectively referred to as 25-hydroxyvitamin D or 25(OH)D, are measured in serum to assess a person's vitamin D status. Calcifediol is further hydroxylated by the kidneys and certain immune cells to form calcitriol (1,25-dihydroxycholecalciferol; 1,25(OH) 2 D), the biologically active form of vitamin D. [ 3 ] Calcitriol attaches to vitamin D receptors , which are nuclear receptors found in various tissues throughout the body.
The discovery of the vitamin in 1922 was due to an effort to identify the dietary deficiency in children with rickets . [ 6 ] [ 7 ] Adolf Windaus received the Nobel Prize in Chemistry in 1928 for his work on the constitution of sterols and their connection with vitamins. [ 8 ] Today, government food fortification programs in some countries and recommendations to consume vitamin D supplements are intended to prevent or treat vitamin D deficiency rickets and osteomalacia. Many other health conditions have been linked to vitamin D deficiency; however, the health benefits of vitamin D supplementation in individuals who are already vitamin D sufficient are unproven. [ 2 ] [ 9 ] [ 10 ] [ 11 ]
Several forms ( vitamers ) of vitamin D exist, with the two major forms being vitamin D 2 or ergocalciferol, and vitamin D 3 or cholecalciferol. [ 1 ] The common-use term "vitamin D" refers to both D 2 and D 3 , which were chemically characterized, respectively, in 1931 and 1935. Vitamin D 3 was shown to result from the ultraviolet irradiation of 7-dehydrocholesterol. Although a chemical nomenclature for vitamin D forms was recommended in 1981, [ 12 ] alternative names remain commonly used. [ 3 ]
Chemically, the various forms of vitamin D are secosteroids , meaning that one of the bonds in the steroid rings is broken. [ 13 ] The structural difference between vitamin D 2 and vitamin D 3 lies in the side chain : vitamin D 2 has a double bond between carbons 22 and 23, and a methyl group on carbon 24. Vitamin D analogues have also been synthesized. [ 3 ]
The active vitamin D metabolite, calcitriol, exerts its biological effects by binding to the vitamin D receptor (VDR), which is primarily located in the nuclei of target cells. [ 1 ] [ 13 ] When calcitriol binds to the VDR, it enables the receptor to act as a transcription factor , modulating the gene expression of transport proteins involved in calcium absorption in the intestine, such as TRPV6 and calbindin . [ 14 ] The VDR is part of the nuclear receptor superfamily of steroid hormone receptors , which are hormone-dependent regulators of gene expression. These receptors are expressed in cells across most organs. VDR expression decreases as age increases. [ 1 ] [ 4 ]
Activation of VDR in the intestine, bone, kidney, and parathyroid gland cells plays a crucial role in maintaining calcium and phosphorus levels in the blood, a process that is assisted by parathyroid hormone and calcitonin , thereby supporting bone health . [ 1 ] [ 15 ] [ 4 ] VDR also regulates cell proliferation and differentiation . Additionally, vitamin D influences the immune system, with VDRs being expressed in several types of white blood cells, including monocytes and activated T and B cells . [ 16 ]
Worldwide, more than one billion people [ 17 ] - infants, children, adults and the elderly [ 18 ] - can be considered vitamin D deficient, with reported percentages dependent on what measurement is used to define "deficient". [ 19 ] Deficiency is common in the Middle East, [ 18 ] Asia, [ 20 ] Africa [ 21 ] and South America, [ 22 ] but also exists in North America and Europe. [ 23 ] [ 18 ] [ 24 ] [ 25 ] Dark-skinned populations in North America, Europe and Australia have a higher percentage of deficiency than light-skinned populations of European origin. [ 26 ] [ 27 ] [ 28 ]
Serum 25(OH)D concentration is used as a biomarker for vitamin D deficiency. Units of measurement are either ng/mL or nmol/L, with one ng/mL equal to 2.5 nmol/L. There is no consensus on defining vitamin D deficiency, insufficiency, sufficiency, or the optimum for all aspects of health. [ 19 ] According to the US Institute of Medicine Dietary Reference Intake Committee, a serum concentration below 30 nmol/L significantly increases the risk of rickets caused by vitamin D deficiency in infants and young children and reduces absorption of dietary calcium from the normal range of 60–80% to as low as 15%, whereas a concentration above 40 nmol/L is needed to prevent osteomalacia and bone loss in the elderly, and above 50 nmol/L is considered sufficient for all health needs. [ 5 ] : 75–111 Other sources have defined deficiency as less than 25 nmol/L, insufficiency as 30–50 nmol/L [ 29 ] and optimal as greater than 75 nmol/L. [ 30 ] [ 31 ] Part of the controversy stems from studies reporting differences in serum 25(OH)D between ethnic groups, with genetic as well as environmental reasons proposed for these variations. African-American populations have lower serum 25(OH)D than age-matched white populations, but at all ages have superior calcium absorption efficiency, a higher bone mineral density, and, as elderly, a lower risk of osteoporosis and fractures. [ 5 ] : 439–440 Supplementation in this population to achieve proposed 'standard' concentrations could, in theory, cause harmful vascular calcification . [ 32 ]
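The unit conversion and the IOM cut-offs quoted above can be made concrete with a small worked example. The following Python sketch is illustrative only; the function names and band labels are assumptions for the example, not from any clinical library or standard:

```python
NMOL_PER_NG_ML = 2.5  # 1 ng/mL of serum 25(OH)D equals 2.5 nmol/L

def to_nmol_per_l(ng_per_ml: float) -> float:
    """Convert a serum 25(OH)D concentration from ng/mL to nmol/L."""
    return ng_per_ml * NMOL_PER_NG_ML

def iom_band(nmol_per_l: float) -> str:
    """Band a serum 25(OH)D value using the IOM cut-offs quoted above."""
    if nmol_per_l < 30:
        return "deficient: elevated rickets risk, impaired calcium absorption"
    if nmol_per_l < 40:
        return "below the level needed to prevent osteomalacia in the elderly"
    if nmol_per_l < 50:
        return "above the bone-protective level, below general sufficiency"
    return "sufficient for all health needs per the IOM"

print(to_nmol_per_l(20))            # 50.0 -- 20 ng/mL equals 50 nmol/L
print(iom_band(to_nmol_per_l(20)))  # sufficient for all health needs per the IOM
```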
Using the 25(OH)D assay to screen the generally healthy population in order to identify and treat individuals is considered less cost-effective than a government-mandated fortification program. Instead, there is a recommendation that testing be limited to those showing symptoms of vitamin D deficiency or who have health conditions known to cause vitamin D deficiency. [ 4 ] [ 25 ]
Causes of insufficient vitamin D synthesis in the skin include insufficient exposure to UVB light from sunlight due to living at high latitudes (farther from the equator, with resultant shorter daylight hours in winter). Serum concentration by the end of winter can be lower by one-third to one-half compared to the end of summer. [ 5 ] : 100–101, 371–379 [ 4 ] [ 33 ] The prevalence of vitamin D deficiency increases with age due to a decrease in 7-dehydrocholesterol synthesis in the skin and a decline in kidney capacity to convert calcidiol to calcitriol, [ 34 ] the latter seen to a greater degree in people with chronic kidney disease. [ 35 ] Despite these age effects, elderly people can still synthesize sufficient calcitriol if enough skin is exposed to UVB light; absent that, a dietary supplement is recommended. [ 34 ] Other causes of insufficient synthesis are sunlight being blocked by air pollution, [ 36 ] urban/indoor living, long-term hospitalizations and stays in extended care facilities, cultural or religious lifestyle choices that favor sun-blocking clothing, recommendations to use sun-blocking clothing or sunscreen to reduce the risk of skin cancer, and lastly, the UVB-blocking nature of dark skin . [ 24 ]
Consumption of foods that naturally contain vitamin D is rarely sufficient to maintain a recommended serum concentration of 25(OH)D in the absence of the contribution of skin synthesis. Fractional contributions are roughly 20% diet and 80% sunlight. [ 4 ] Vegans have a lower dietary intake of vitamin D and lower serum 25(OH)D than omnivores, with lacto-ovo-vegetarians falling in between due to the vitamin content of egg yolks and fortified dairy products. [ 37 ] To bridge the difference, 15 countries have mandatory and 10 countries have voluntary food fortification programs; [ 38 ] the United States is one of the countries with mandatory fortification. The original fortification practices, circa the early 1930s, were limited to cow's milk, which had a large effect on reducing infant and child rickets. In July 2016, the US Food and Drug Administration approved the addition of vitamin D to plant milk beverages intended as milk alternatives, such as beverages made from soy, almond, coconut, and oats. [ 39 ] At an individual level, people may choose to consume a multivitamin/mineral product or else a vitamin-D-only product. [ 40 ]
There are many disease states, medical treatments, and medications that put people at risk for vitamin D deficiency. Chronic diseases that increase risk include kidney [ 35 ] and liver failure, Crohn's disease, inflammatory bowel disease, and malabsorption syndromes such as cystic fibrosis, and hyper- or hypo-parathyroidism. [ 24 ] Obesity sequesters vitamin D in fat tissues, thereby lowering serum levels, [ 41 ] but bariatric surgery to treat obesity interferes with dietary vitamin D absorption, also causing deficiency. [ 42 ] Medications interacting with vitamin D metabolism include antiretrovirals, anti-seizure drugs, glucocorticoids, systemic antifungals such as ketoconazole, cholestyramine, and rifampicin. [ 4 ] [ 24 ] Organ transplant recipients receive immunosuppressive therapy that is associated with an increased risk to develop skin cancer, so they are advised to avoid sunlight exposure, and to take a vitamin D supplement. [ 43 ]
Daily dose regimens are preferred to administration of large doses at weekly or monthly intervals, and D 3 may be preferred over D 2 , but there is a lack of consensus as to the optimal type, dose, duration, or what to measure to deem treatment a success. Daily regimens on the order of 4,000 IU/day (for other than infants) have a greater effect on 25(OH)D recovery from deficiency and a lower risk of side effects compared to weekly or monthly bolus doses, the latter as high as 100,000 IU. The only advantage of bolus dosing could be better compliance, as bolus dosing is usually administered by a healthcare professional rather than self-administered. [ 4 ] While some studies have found that vitamin D 3 raises 25(OH)D blood levels faster and remains active in the body longer, [ 44 ] [ 45 ] others contend that vitamin D 2 sources are equally bioavailable and effective for raising and sustaining 25(OH)D. [ 46 ] [ 47 ] If digestive disorders compromise absorption, then intramuscular injection of up to 100,000 IU of vitamin D 3 is therapeutic. [ 4 ]
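For orientation, the two dosing figures quoted above are related by simple arithmetic; the sketch below only compares cumulative doses and makes no clinical claim:

```python
# A single 100,000 IU bolus delivers the same cumulative dose as 25 days
# of the 4,000 IU/day regimen discussed above.
daily_iu = 4_000
bolus_iu = 100_000
print(bolus_iu / daily_iu)  # 25.0
```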
Melanin , specifically the sub-type eumelanin , is a biomolecule consisting of linked molecules of the oxidized amino acid tyrosine . It is produced by cells called melanocytes in a process called melanogenesis. In the skin, melanin is located in the bottom layer (the stratum basale ) of the skin's epidermis . Melanin can be permanently incorporated into the skin, resulting in dark skin , or else have its synthesis initiated by exposure to UV radiation , causing the skin to darken as a temporary sun tan . Eumelanin is an effective absorbent of light; the pigment can dissipate over 99.9% of absorbed UV radiation. [ 48 ] Because of this property, eumelanin is thought to protect skin cells from sunlight 's ultraviolet A (UVA) and ultraviolet B (UVB) radiation damage, reducing the risk of skin tissue folate depletion, preventing premature skin aging and reducing the risks of sunburn and skin cancer . [ 49 ] Melanin inhibits UVB-powered vitamin D synthesis in the skin. In areas of the world not distant from the equator, abundant, year-round exposure to sunlight means that even dark-skinned populations have adequate skin synthesis. [ 50 ] However, when dark-skinned people cover much of their bodies with clothing for cultural or climate reasons, or live a primarily indoor life in urban conditions, or live at higher latitudes which provide less sunlight in winter, they are at risk for vitamin D deficiency. [ 24 ] [ 51 ] The last cause has been described as a "latitude-skin color mismatch". [ 50 ]
In the United States, vitamin D deficiency is particularly common among non-white Hispanic and African-American populations. [ 29 ] [ 50 ] [ 52 ] However, despite having on-average 25(OH)D serum concentrations below the 50 nmol/L amount considered sufficient, African Americans have higher bone mineral density and lower fracture risk when compared to European-origin people. Possible mechanisms may include higher calcium retention, lower calcium excretion, and greater bone resistance to parathyroid hormone, [ 50 ] [ 52 ] [ 53 ] also genetically lower serum vitamin D-binding protein which would result in adequate bioavailable 25(OH)D despite total serum 25(OH)D being lower. [ 54 ] The bone density and fracture risk paradox does not necessarily carry over to non-skeletal health conditions such as arterial calcification, cancer, diabetes or all-cause mortality. There is conflicting evidence that in the African American population, 'deficiency' as currently defined increases the risk of non-skeletal health conditions, and some evidence that supplementation increases risk, [ 50 ] [ 52 ] including for harmful vascular calcification. [ 32 ] African Americans, and by extension other dark-skinned populations, may need different definitions for vitamin D deficiency, insufficiency, and adequate. [ 32 ]
Comparative studies carried out in lactating mothers indicate a mean vitamin D content in breast milk of 45 IU/liter. [ 55 ] This content is too low to meet the vitamin D requirement of 400 IU/day recommended by several government organizations ("...as breast milk is not a meaningful source of vitamin D." [ 5 ] : 385 ). The same government organizations recommend that lactating women consume 600 IU/day, [ 2 ] [ 56 ] [ 57 ] [ 58 ] but this is insufficient to raise breast milk content enough to deliver the recommended intake. [ 55 ] There is evidence that breast milk content can be increased, but because the transfer of the vitamin from the lactating mother's serum to milk is inefficient, this requires that she consume a dietary supplement above the government-set safe upper limit of 4,000 IU/day. [ 55 ] Given the shortfall, there are recommendations that breast-fed infants be given a vitamin D dietary supplement of 400 IU/day during the first year of life. [ 55 ] If not breastfeeding, infant formulas are designed to deliver 400 IU/day for an infant consuming a liter of formula per day, [ 59 ] a normal volume for a full-term infant after the first month. [ 60 ]
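The size of the shortfall is easy to estimate. In the sketch below, the milk content (45 IU/L) and the 400 IU/day recommendation come from the text, while the daily milk volume of 0.75 L is an illustrative assumption, not a figure from the source:

```python
# Estimated daily vitamin D intake of an exclusively breast-fed infant.
MILK_IU_PER_L = 45            # mean breast milk content (from the text)
RECOMMENDED_IU_PER_DAY = 400  # recommended infant intake (from the text)

daily_milk_l = 0.75  # assumed typical daily milk volume
intake_iu = MILK_IU_PER_L * daily_milk_l
print(f"{intake_iu:.0f} IU/day, about "
      f"{RECOMMENDED_IU_PER_DAY / intake_iu:.0f}x below the recommendation")
# -> 34 IU/day, about 12x below the recommendation
```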
Vitamin D toxicity, or hypervitaminosis D, is the toxic state of an excess of vitamin D. It is rare, having occurred historically during a time of unregulated fortification of foods, especially those provided to infants, [ 5 ] : 431–432 or, more recently, with consumption of high-dose vitamin D dietary supplements following inappropriate prescribing, non-prescribed consumption of high-dose, over-the-counter preparations, or manufacturing errors resulting in content far in excess of what is on the label. [ 40 ] [ 61 ] [ 62 ] Ultraviolet light alone - sunlight or tanning beds - can raise serum 25(OH)D concentration to a bit higher than 100 nmol/L, but not to a level that causes hypervitaminosis D, the reasons being that there is a limiting amount of the precursor 7-dehydrocholesterol synthesized in the skin and a negative feedback in the kidney wherein the presence of calcitriol induces diversion to metabolically inactive 24,25-dihydroxyvitamin D rather than metabolically active calcitriol (1,25-dihydroxyvitamin D). [ 63 ] Further metabolism yields calcitroic acid , an inactive water-soluble compound that is excreted in bile . [ 64 ]
There is no general agreement about the intake levels at which vitamin D may cause harm. According to the IOM review, "Doses below 10,000 IU/day are not usually associated with toxicity, whereas doses equal to or above 50,000 IU/day for several weeks or months are frequently associated with toxic side effects including documented hypercalcemia." [ 5 ] : 427 The normal range for blood concentration of 25-hydroxyvitamin D in adults is 20 to 50 nanograms per milliliter (ng/mL; equivalent to 50 to 125 nmol/L). Blood levels necessary to cause adverse effects in adults are thought to be greater than about 150 ng/mL. [ 5 ] : 424–446
An excess of vitamin D causes hypercalcaemia (abnormally high blood concentrations of calcium), which can cause overcalcification of the bones and soft tissues including the arteries, heart, and kidneys. Untreated, this can lead to irreversible kidney failure. Symptoms of vitamin D toxicity may include increased thirst , increased urination , nausea, vomiting, diarrhea, decreased appetite, irritability, constipation, fatigue, muscle weakness, and insomnia. [ 65 ] [ 66 ] [ 67 ]
In 2011, the U.S. National Academy of Medicine revised the tolerable upper intake levels (ULs) to protect against vitamin D toxicity. Before the revision the UL for ages 9+ years was 50 μg/day (2,000 IU/day). [ 5 ] : 424–445 Per the revision, the UL is defined as "the highest average daily intake of a nutrient that is likely to pose no risk of adverse health effects for nearly all persons in the general population". [ 68 ] The U.S. ULs in micrograms (mcg or μg) and International Units (IU), for both males and females, by age, are: 25 μg (1,000 IU) for 0–6 months, 37.5 μg (1,500 IU) for 6–12 months, 62.5 μg (2,500 IU) for 1–3 years, 75 μg (3,000 IU) for 4–8 years, and 100 μg (4,000 IU) for ages 9 years and up, including pregnancy and lactation.
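Since the ULs are given in both units, the conversion is worth making explicit: for vitamin D, 1 μg = 40 IU. The sketch below simply tabulates the ULs listed above; the dictionary layout and names are illustrative assumptions:

```python
# Vitamin D amounts: 1 microgram = 40 IU (the standard vitamin D conversion).
IU_PER_MCG = 40

# IOM (2011) tolerable upper intake levels, as listed above, in mcg/day.
UL_MCG_PER_DAY = {
    "0-6 months": 25,
    "6-12 months": 37.5,
    "1-3 years": 62.5,
    "4-8 years": 75,
    "9+ years (incl. pregnancy/lactation)": 100,
}

for age, mcg in UL_MCG_PER_DAY.items():
    print(f"{age}: {mcg} mcg/day = {mcg * IU_PER_MCG:.0f} IU/day")
# The 9+ years row reproduces the 4,000 IU/day adult UL discussed below.
```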
Although in the U.S. the adult UL is set at 4,000 IU/day, over-the-counter products are available at 5,000, 10,000 and even 50,000 IU (the last with directions to take once a week). The percentage of the U.S. population taking over 4,000 IU/day has increased since 1999. [ 40 ]
In almost every case, stopping the vitamin D supplementation combined with a low-calcium diet and corticosteroid drugs will allow for a full recovery within a month. [ 65 ] [ 66 ] [ 67 ]
Idiopathic infantile hypercalcemia is caused by a mutation of the CYP24A1 gene, leading to a reduction in the degradation of vitamin D. Infants who have such a mutation have an increased sensitivity to vitamin D and, in the case of additional intake, a risk of hypercalcaemia. [ 69 ] The disorder can continue into adulthood. [ 70 ]
Supplementation with vitamin D is a reliable method for preventing or treating rickets . On the other hand, the effects of vitamin D supplementation on non-skeletal health are uncertain. [ 71 ] [ 72 ] A review did not find any effect from supplementation on the rates of non-skeletal disease, other than a tentative decrease in mortality in the elderly. [ 73 ] Vitamin D supplements do not alter the outcomes for myocardial infarction , stroke or cerebrovascular disease , cancer, bone fractures or knee osteoarthritis . [ 10 ] [ 74 ]
A US Institute of Medicine (IOM) report states: "Outcomes related to cancer, cardiovascular disease and hypertension , and diabetes and metabolic syndrome, falls and physical performance, immune functioning and autoimmune disorders , infections, neuropsychological functioning, and preeclampsia could not be linked reliably with intake of either calcium or vitamin D, and were often conflicting." [ 5 ] : 5 Evidence for and against each disease state is provided in detail. [ 5 ] : 124–299 Some researchers claim the IOM was too definitive in its recommendations and made a mathematical mistake when calculating the blood level of vitamin D associated with bone health. [ 75 ] Members of the IOM panel maintain that they used a "standard procedure for dietary recommendations" and that the report is solidly based on the data. [ 75 ]
Vitamin D 3 supplementation has been tentatively found to lead to a reduced risk of death in the elderly, [ 76 ] [ 73 ] but the effect has not been deemed pronounced, or certain enough, to make taking supplements recommendable. [ 10 ] Other forms (vitamin D 2 , alfacalcidol, and calcitriol) do not appear to have any beneficial effects concerning the risk of death. [ 76 ] High blood levels appear to be associated with a lower risk of death, but it is unclear if supplementation can result in this benefit. [ 77 ] Both an excess and a deficiency in vitamin D appear to cause abnormal functioning and premature aging. [ 78 ] [ 79 ] The relationship between serum calcifediol concentrations and all-cause mortality is "U-shaped": mortality is elevated at high and low calcifediol levels, relative to moderate levels. Harm from elevated calcifediol appears to occur at a lower level in dark-skinned Canadian and United States populations than in light-skinned populations. [ 5 ] : 424–435
Rickets, a childhood disease, is characterized by impeded growth and soft, weak, deformed long bones that bend and bow under their weight as children start to walk. Maternal vitamin D deficiency can cause fetal bone defects from before birth and impairment of bone quality after birth. [ 80 ] [ 81 ] Rickets typically appears between 3 and 18 months of age. [ 82 ] The condition can be caused by vitamin D, calcium or phosphorus deficiency. [ 83 ] Vitamin D deficiency remains the main cause of rickets among young infants in most countries because breast milk is low in vitamin D, and darker skin, social customs, and climatic conditions can contribute to inadequate sun exposure. [ citation needed ] A post-weaning Western omnivore diet characterized by high intakes of meat, fish, eggs and vitamin D fortified milk is protective, whereas low intakes of those foods and high cereal/grain intake contribute to risk. [ 84 ] [ 85 ] [ 86 ] For young children with rickets, supplementation with vitamin D plus calcium was superior to the vitamin alone for bone healing. [ 87 ] [ 88 ]
Characteristics of osteomalacia are softening of the bones, leading to bending of the spine, bone fragility, and increased risk for fractures. [ 1 ] Osteomalacia is usually present when 25-hydroxyvitamin D levels are less than about 10 ng/mL. [ 90 ] Osteomalacia can progress to osteoporosis , a condition of reduced bone mineral density with increased bone fragility and risk of bone fractures. Osteoporosis can be a long-term effect of calcium and/or vitamin D insufficiency, the latter contributing by reducing calcium absorption. [ 2 ] In the absence of confirmed vitamin D deficiency, there is no evidence that vitamin D supplementation without concomitant calcium slows or stops the progression of osteomalacia to osteoporosis. [ 9 ] For older people with osteoporosis, taking vitamin D with calcium may help prevent hip fractures, but it also slightly increases the risk of stomach and kidney problems. [ 91 ] [ 92 ] The reduced risk for fractures is not seen in healthier, community-dwelling elderly. [ 10 ] [ 93 ] [ 94 ] Low serum vitamin D levels have been associated with falls , [ 95 ] but taking extra vitamin D does not appear to reduce that risk. [ 96 ]
Athletes who are vitamin D deficient are at an increased risk of stress fractures and/or major breaks, particularly those engaging in contact sports. Incremental decreases in risk are observed with rising serum 25(OH)D concentrations, plateauing at 50 ng/mL; no additional benefit is seen at levels beyond this point. [ 97 ]
While low serum 25-hydroxyvitamin D status has been associated with a higher risk of cancer in observational studies , [ 98 ] [ 99 ] [ 100 ] the general conclusion is that there is insufficient evidence for an effect of vitamin D supplementation on the risk of cancer, [ 2 ] [ 101 ] [ 102 ] although there is some evidence for a reduction in cancer mortality. [ 98 ] [ 103 ]
Vitamin D supplementation is not associated with a reduced risk of stroke, cerebrovascular disease , myocardial infarction , or ischemic heart disease . [ 10 ] [ 104 ] [ 105 ] Supplementation does not lower blood pressure in the general population. [ 106 ] [ 107 ] [ 108 ] One meta-analysis found a small increase in risk of stroke when calcium and vitamin D supplements were taken together. [ 109 ]
Vitamin D receptors are found in cell types involved in immunity, though their functions there are not well understood. Some autoimmune and infectious diseases are associated with vitamin D deficiency, but for most there is either no evidence that supplementation has a benefit or, for some, evidence indicating that there is no benefit. [ 110 ] [ 111 ] [ 112 ] [ 113 ]
Low plasma vitamin D concentrations have been reported for autoimmune thyroid diseases , [ 114 ] lupus , [ 115 ] myasthenia gravis , [ 116 ] rheumatoid arthritis , [ 117 ] and multiple sclerosis . [ 118 ] For multiple sclerosis and rheumatoid arthritis, intervention trials using vitamin D supplementation did not demonstrate therapeutic effects. [ 111 ] [ 119 ]
Vitamin D supplementation does not reduce the risk of acute respiratory disease . [ 120 ] In general, vitamin D functions to activate the innate and dampen the adaptive immune systems with antibacterial, antiviral and anti-inflammatory effects. [ 121 ] [ 122 ] Low serum levels of vitamin D appear to be a risk factor for tuberculosis . [ 123 ] However, supplementation trials showed no benefit. [ 112 ] [ 113 ]
Vitamin D deficiency has been linked to the severity of inflammatory bowel disease (IBD). [ 124 ] However, whether vitamin D deficiency causes IBD or is a consequence of the disease is not clear. [ 125 ] Supplementation leads to improvements in scores for clinical inflammatory bowel disease activity and biochemical markers, [ 125 ] [ 126 ] and less frequent relapse of symptoms in IBD. [ 125 ]
Vitamin D supplementation does not help prevent asthma attacks or alleviate symptoms. [ 127 ]
In July 2020, the US National Institutes of Health stated "There is insufficient evidence to recommend for or against using vitamin D supplementation for the prevention or treatment of COVID-19." [ 128 ] The same year, the UK National Institute for Health and Care Excellence (NICE) position was not to offer a vitamin D supplement to people solely to prevent or treat COVID-19. [ 129 ] NICE updated its position in 2022 to "Do not use vitamin D to treat COVID-19 except as part of a clinical trial." [ 130 ] Both organizations included recommendations to continue the previously established recommendations on vitamin D supplementation for other reasons, such as bone and muscle health, as applicable. Both organizations noted that more people may require supplementation due to lower amounts of sun exposure during the pandemic. [ 128 ] [ 129 ]
Vitamin D deficiency and insufficiency have been associated with adverse outcomes in COVID-19. [ 131 ] [ 132 ] [ 133 ] [ 134 ] [ 135 ] Supplementation trials, mostly of a single large oral dose given upon hospital admission, reported reductions in subsequent transfers to intensive care and in all-cause mortality. [ 136 ] [ 137 ] [ 138 ]
Vitamin D supplementation substantially reduced the rate of moderate or severe exacerbations of chronic obstructive pulmonary disease (COPD). [ 139 ]
A meta-analysis reported that vitamin D supplementation significantly reduced the risk of type 2 diabetes for non-obese people with prediabetes . [ 140 ] Another meta-analysis reported that vitamin D supplementation significantly improved glycemic control [homeostatic model assessment-insulin resistance (HOMA-IR)], hemoglobin A1C (HbA1C), and fasting blood glucose (FBG) in individuals with type 2 diabetes. [ 141 ] In prospective studies, high versus low levels of vitamin D were respectively associated with a significant decrease in risk of type 2 diabetes, combined type 2 diabetes and prediabetes, and prediabetes. [ 142 ] A systematic review included one clinical trial that showed vitamin D supplementation together with insulin maintained levels of fasting C-peptide after 12 months better than insulin alone. [ 143 ]
A meta-analysis of observational studies showed that children with ADHD have lower vitamin D levels and that there was a small association between low vitamin D levels at the time of birth and later development of ADHD. [ 144 ] Several small, randomized controlled trials of vitamin D supplementation indicated improved ADHD symptoms such as impulsivity and hyperactivity. [ 145 ]
Clinical trials of vitamin D supplementation for depressive symptoms have generally been of low quality and show no overall effect, although subgroup analysis showed supplementation for participants with clinically significant depressive symptoms or depressive disorder had a moderate effect. [ 146 ]
A systematic review of clinical studies found an association between low vitamin D levels with cognitive impairment and a higher risk of developing Alzheimer's disease . However, lower vitamin D concentrations are also associated with poor nutrition and spending less time outdoors. Therefore, alternative explanations for the increase in cognitive impairment exist and hence a direct causal relationship between vitamin D levels and cognition could not be established. [ 147 ]
People diagnosed with schizophrenia tend to have lower serum vitamin D concentrations compared to those without the condition. This may be a consequence of the disease rather than a cause, due, for example, to low dietary vitamin D and less time spent exposed to sunlight. [ 148 ] [ 149 ] Results from supplementation trials have been inconclusive. [ 148 ]
Erectile dysfunction can be a consequence of vitamin D deficiency. Mechanisms may include the regulation of vascular stiffness, the production of vasodilating nitric oxide , and the regulation of vessel permeability. However, the clinical trial literature does not yet contain sufficient evidence that supplementation treats the problem. Part of the complexity is that vitamin D deficiency is also linked to morbidities that are associated with erectile dysfunction, such as obesity, hypertension, diabetes mellitus, hypercholesterolemia, chronic kidney disease and hypogonadism. [ 150 ] [ 151 ]
In women, vitamin D receptors are expressed in the superficial layers of the urogenital organs. There is an association between vitamin D deficiency and a decline in sexual functions, including sexual desire, orgasm, and satisfaction in women, with symptom severity correlated with vitamin D serum concentration. The clinical trial literature does not yet contain sufficient evidence that supplementation reverses these dysfunctions or improves other aspects of vaginal or urogenital health. [ 152 ]
Pregnant women often do not take the recommended amount of vitamin D. [ 153 ] Low levels of vitamin D in pregnancy are associated with gestational diabetes , pre-eclampsia , and small for gestational age infants. [ 154 ] Although taking vitamin D supplements during pregnancy raises blood levels of vitamin D in the mother at term, the full extent of benefits for the mother or baby is unclear. [ 154 ] [ 155 ] [ 156 ]
Obesity increases the risk of having low serum vitamin D. Supplementation does not lead to weight loss, but weight loss increases serum vitamin D. The theory is that fatty tissue sequesters vitamin D. [ 41 ] Bariatric surgery as a treatment for obesity can lead to vitamin deficiencies. Long-term follow-up reported deficiencies for vitamins D, E, A, K and B12, with D the most common at 36%. [ 42 ]
There is evidence that the pathogenesis of uterine fibroids is associated with low serum vitamin D and that supplementation reduces the size of fibroids. [ 157 ] [ 158 ]
Governmental regulatory agencies stipulate which health claims the food and dietary supplement industries may make as statements on packaging.
Europe: European Food Safety Authority (EFSA)
US: Food and Drug Administration (FDA)
Canada: Health Canada
Japan: Foods with Nutrient Function Claims (FNFC)
Various government institutions have proposed different recommendations for the amount of daily intake of vitamin D. These vary according to age, pregnancy, or lactation, and the extent to which assumptions are made regarding skin synthesis. [ 2 ] [ 56 ] [ 57 ] [ 58 ] [ 163 ] Older recommendations were lower. For example, the US Adequate Intake recommendations from 1997 were 200 IU/day for infants, children, adults to age 50, and women during pregnancy or lactation; 400 IU/day for ages 51–70; and 600 IU/day for 71 and older. [ 165 ]
Conversion: 1 μg (microgram) = 40 IU (international unit). [ 56 ] For dietary recommendation and food labeling purposes government agencies consider vitamin D 3 and D 2 bioequivalent. [ 5 ] [ 56 ] [ 57 ] [ 58 ] [ 163 ]
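A minimal Python sketch of this conversion follows; the example values echo intakes mentioned elsewhere in this section and are illustrative only.

```python
# Minimal sketch of the labeling conversion stated above: 1 microgram = 40 IU,
# with vitamin D2 and D3 treated as bioequivalent for this purpose.

IU_PER_MCG = 40

def mcg_to_iu(mcg: float) -> float:
    return mcg * IU_PER_MCG

def iu_to_mcg(iu: float) -> float:
    return iu / IU_PER_MCG

print(mcg_to_iu(10))    # 400 IU  - the NHS autumn/winter supplement, below
print(mcg_to_iu(15))    # 600 IU  - the EFSA adequate intake, below
print(iu_to_mcg(4000))  # 100 ug  - the adult tolerable upper intake level
```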
The UK National Health Service (NHS) recommends that people at risk of vitamin D deficiency, breast-fed babies, formula-fed babies taking less than 500 ml/day, and children aged 6 months to 4 years, should take daily vitamin D supplements throughout the year to ensure sufficient intake. [ 56 ] This includes people with limited skin synthesis of vitamin D, who are not often outdoors, are frail, housebound, living in a care home, or usually wearing clothes that cover up most of the skin, or with dark skin, such as having an African, African-Caribbean or south Asian background. Other people may be able to make adequate vitamin D from sunlight exposure from April to September. The NHS and Public Health England recommend that everyone, including those who are pregnant and breastfeeding, consider taking a daily supplement containing 10 μg (400 IU) of vitamin D during autumn and winter because of inadequate sunlight for vitamin D synthesis. [ 166 ]
The dietary reference intake for vitamin D issued in 2011 by the Institute of Medicine (IoM) (renamed National Academy of Medicine in 2015), superseded previous recommendations which were expressed in terms of adequate intake. The recommendations were formed assuming the individual has no skin synthesis of vitamin D because of inadequate sun exposure. The reference intake for vitamin D refers to total intake from food, beverages, and supplements, and assumes that calcium requirements are being met. [ 5 ] : 362–394 The tolerable upper intake level (UL) is defined as "the highest average daily intake of a nutrient that is likely to pose no risk of adverse health effects for nearly all persons in the general population". [ 5 ] : 424–446 Although ULs are believed to be safe, information on the long-term effects is incomplete and these levels of intake are not recommended for long-term consumption. [ 5 ] : 404 : 439–440
For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin D labeling purposes, 100% of the daily value was 400 IU (10 μg), but in May 2016, it was revised to 800 IU (20 μg) to bring it into agreement with the recommended dietary allowance (RDA). [ 167 ] [ 168 ] A table of the old and new adult daily values is provided at Reference Daily Intake .
Health Canada published recommended dietary intakes (DRIs) and tolerable upper intake levels (ULs) for vitamin D. [ 57 ]
Australia and New Zealand published nutrient reference values including guidelines for dietary vitamin D intake in 2006. [ 163 ] About a third of Australians have vitamin D deficiency. [ 169 ] [ 170 ]
The European Food Safety Authority (EFSA) in 2016 [ 58 ] reviewed the current evidence, finding the relationship between serum 25(OH)D concentration and musculoskeletal health outcomes is widely variable. They considered that average requirements and population reference intake values for vitamin D cannot be derived and that a serum 25(OH)D concentration of 50 nmol/L was a suitable target value. For all people over the age of 1, including women who are pregnant or lactating, they set an adequate intake of 15 μg/day (600 IU). [ 58 ]
On the other hand, the EU Commission defined nutrition labelling for foodstuffs as regards recommended daily allowances (RDA) for vitamin D to 5 μg/day (200 IU) as 100%. [ 171 ]
The EFSA reviewed safe levels of intake in 2012, [ 164 ] setting the tolerable upper limit for adults at 100 μg/day (4000 IU), a similar conclusion as the IOM.
The Swedish National Food Agency recommends a daily intake of 10 μg (400 IU) of vitamin D 3 for children and adults up to 75 years, and 20 μg (800 IU) for adults 75 and older. [ 172 ]
Non-government organisations in Europe have made their own recommendations. The German Society for Nutrition recommends 20 μg. [ 173 ] The European Menopause and Andropause Society recommends postmenopausal women consume 15 μg (600 IU) until age 70, and 20 μg (800 IU) from age 71. This dose should be increased to 100 μg (4,000 IU) in some patients with very low vitamin D status or in case of co-morbid conditions. [ 174 ]
Few foods naturally contain vitamin D. Cod liver oil as a dietary supplement contains 450 IU/teaspoon. Fatty fish (but not lean fish such as tuna) are the best natural food sources of vitamin D 3 . Beef liver, eggs, and cheese have modest amounts. Mushrooms provide variable amounts of vitamin D 2 , as mushrooms can be treated with UV light to greatly increase their content. [ 46 ] [ 175 ] In certain countries, breakfast cereals, dairy milk and plant milk products are fortified. Infant formulas are fortified with 400 to 1000 IU per liter, [ 2 ] [ 176 ] one liter being a normal daily volume for a full-term infant after the first month. [ 60 ] Cooking only minimally decreases vitamin content. [ 176 ]
In the early 1930s, the United States and countries in northern Europe began to fortify milk with vitamin D in an effort to eradicate rickets. This, plus medical advice to expose infants to sunlight, effectively ended the high prevalence of rickets. The proven health benefit of vitamin D led to the fortification of many foods, even foods as inappropriate as hot dogs and beer. In the 1950s, due to some highly publicized cases of hypercalcemia and birth defects, vitamin D fortification became regulated, and in some countries discontinued. [ 33 ] As of 2024, governments have established mandated or voluntary food fortification programs to combat deficiency in, respectively, 15 and 10 countries. [ 38 ] Depending on the country, [ 38 ] manufactured foods fortified with either vitamin D 2 or D 3 may include dairy milk and other dairy foods, fruit juices and fruit juice drinks, meal replacement food bars, soy protein -based beverages, wheat flour or corn meal products, infant formulas , breakfast cereals and ' plant milks ', [ 39 ] [ 177 ] [ 23 ] the last described as beverages made from soy, almond, rice, oats and other plant sources intended as alternatives to dairy milk. [ 178 ]
Synthesis of vitamin D in nature is dependent on the presence of UV radiation and subsequent activation in the liver and the kidneys. Many animals synthesize vitamin D 3 from 7-dehydrocholesterol , and many fungi synthesize vitamin D 2 from ergosterol . [ 46 ]
Vitamin D 3 is produced photochemically from 7-dehydrocholesterol in the skin of most vertebrate animals, including humans. [ 179 ] The skin consists of two primary layers: the inner layer called the dermis , and the outer, thinner epidermis . Vitamin D is produced in the keratinocytes of the two innermost strata of the epidermis, the stratum basale and stratum spinosum , which also can produce calcitriol and express the vitamin D receptor. [ 180 ] The 7-dehydrocholesterol reacts with UVB light at wavelengths of 290–315 nm. These wavelengths are present in sunlight, as well as in the light emitted by the UV lamps in tanning beds (which produce ultraviolet primarily in the UVA spectrum, but typically produce 4% to 10% of the total UV emissions as UVB). Exposure to light through windows is insufficient because glass almost completely blocks UVB light. [ 181 ] Melanin, present permanently in dark skin or temporarily as a result of tanning, is located in the stratum basale, where it blocks UVB light and thus inhibits vitamin D synthesis. [ 48 ]
The transformation in the skin that converts 7-dehydrocholesterol to vitamin D 3 occurs in two steps. First, 7-dehydrocholesterol is photolyzed by ultraviolet light in a 6-electron conrotatory ring-opening electrocyclic reaction ; the product is previtamin D 3 . Second, previtamin D 3 spontaneously isomerizes to vitamin D 3 ( cholecalciferol ) via a [1,7]- sigmatropic hydrogen shift . In fungi, the conversion from ergosterol to vitamin D 2 follows a similar procedure, forming previtamin D 2 by UVB photolysis, which isomerizes to vitamin D 2 ( ergocalciferol ). [ 4 ]
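The sequence can be restated schematically; the following LaTeX snippet is only a compact summary of the two steps just described (it assumes the amsmath package for \xrightarrow).

```latex
% Schematic restatement of the two-step skin synthesis described above.
\[
\text{7-dehydrocholesterol}
  \;\xrightarrow{\;h\nu\ (\text{UVB, 290--315 nm})\;}\;
\text{previtamin D}_3
  \;\xrightarrow{\;\text{thermal [1,7]-H shift}\;}\;
\text{vitamin D}_3\ (\text{cholecalciferol})
\]
```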
For at least 1.2 billion years, eukaryotes - a classification of life forms that includes single-cell species, fungi, plants, and animals, but not bacteria - have been able to synthesize 7-dehydrocholesterol. When this molecule is exposed to UVB light from the sun, it absorbs the energy in the process of being converted to vitamin D. The original function was to prevent DNA damage, the converted molecule being an end product without vitamin function. To the present day, phytoplankton in the ocean photosynthesize vitamin D without any calcium-management function, as do some species of algae, lichen, fungi, and plants. [ 182 ] [ 183 ] [ 184 ] Only circa 500 million years ago, when animals began to leave the oceans for land, did the UV-converted molecule take on a hormone function as a promoter of calcium regulation. This function required the development of a nuclear vitamin D receptor (VDR) that binds the biologically active vitamin D metabolite 1α,25-dihydroxyvitamin D 3 , plasma transport proteins, and vitamin D-metabolizing CYP450 enzymes regulated by calciotropic hormones. This triumvirate of receptor protein, transport proteins, and metabolizing enzymes is found only in vertebrates . [ 185 ] [ 186 ] [ 187 ]
The initial vitamin function evolved for control of metabolic genes supporting innate and adaptive immunity. Only later did the VDR system start to function as an important regulator of calcium supply for a calcified skeleton in land-based vertebrates. From amphibians onward, bone management is biodynamic, with bone functioning as an internal calcium reservoir under the control of osteoclasts via the combined action of parathyroid hormone and 1α,25-dihydroxyvitamin D 3 . [ 185 ] [ 186 ] [ 187 ]
Most land-based vertebrates - mammals, reptiles, birds, and amphibians - produce vitamin D in response to ultraviolet light. Carnivores and omnivores also get the vitamin from their diets, and herbivores can get some from fungi that are consumed along with plant foods. [ 188 ] In the wild, reptiles require either exposure to sunlight or consumption of prey, or both. In captivity, artificial lighting that provides UVB light is preferred to fortified food. [ 189 ] The same holds true for birds [ 190 ] and amphibians. [ 191 ] There are some exceptions. Feline species and dogs are practically incapable of vitamin D synthesis due to the high activity of 7-dehydrocholesterol reductase , which converts any 7-dehydrocholesterol in the skin to cholesterol before it can be modified by UVB light; they instead get vitamin D from their diet. [ 192 ] [ 193 ]
Fish do not synthesize vitamin D from exposure to ultraviolet light. Wild-caught fish obtain vitamin D via a diet of phytoplankton, zooplankton, and the aquatic food chain. Commercially raised fish are fed D 3 -fortified diets. As with land-based vertebrates, the vitamin is transported by vitamin D binding protein to cellular receptors. Aquaculture research shows that the vitamin is needed for bone health, optimizing growth, reducing fatty liver problems and supporting immune health. Unlike in land-based vertebrates, large amounts of vitamin D 3 are stored in the liver and fatty tissues, making fish a good dietary source for human consumption. [ 194 ]
During the long period between one and three million years ago, hominids , including the ancestors of Homo sapiens , underwent several evolutionary changes. A long-term climate shift toward drier conditions promoted life changes from sedentary forest-dwelling with a primarily plant-based diet toward upright walking/running on open terrain and more meat consumption. [ 195 ] One consequence of the shift to a culture that included more physically active hunting was a need for evaporative cooling from sweat, which, to be functional, meant an evolutionary shift toward less body hair, as evaporation from sweat-wet hair would have cooled the hair but not the skin underneath. [ 196 ] A second consequence was darker skin. [ 195 ] The early humans who evolved in the regions of the globe near the equator had permanent large quantities of the skin pigment melanin in their skins, resulting in brown/black skin tones. For people with light skin tone, exposure to UV radiation induces the synthesis of melanin, causing the skin to darken, i.e., sun tanning . Either way, the pigment can protect by dissipating up to 99.9% of absorbed UV radiation. [ 48 ] In this way, melanin protects skin cells from the UVA and UVB radiation damage that causes photoaging and the risk of malignant melanoma , a cancer of melanin cells. [ 197 ] Melanin also protects against photodegradation of the vitamin folate in skin tissue, and in the eyes, preserves eye health. [ 195 ]
The dark-skinned humans who had evolved in Africa populated the rest of the world through migration some 50,000 to 80,000 years ago. [ 198 ] Following settlement in northward regions of Asia and Europe which seasonally get less sunlight, the selective pressure for radiation-protective skin tone decreased while the need for efficient vitamin D synthesis in skin increased, resulting in low-melanin, lighter skin tones in the rest of the prehistoric world. [ 187 ] [ 186 ] [ 195 ] For people with low skin melanin, moderate sun exposure to the face, arms and lower legs several times a week is sufficient. [ 199 ] However, recent cultural changes such as indoor living and working, UV-blocking skin products to reduce the risk of sunburn, and the emigration of dark-skinned people to countries far from the equator have all contributed to an increased incidence of vitamin D insufficiency and deficiency, which needs to be addressed by food fortification and vitamin D dietary supplements. [ 195 ]
Vitamin D 3 (cholecalciferol) is produced industrially by exposing 7-dehydrocholesterol to UVB and UVC light, followed by purification. The 7-dehydrocholesterol is sourced as an extraction from lanolin , a waxy skin secretion in sheep's wool. [ 200 ] Vitamin D 2 (ergocalciferol) is produced in a similar way using ergosterol from yeast as a starting material. [ 200 ] [ 201 ]
Whether synthesized in the skin or ingested, vitamin D is hydroxylated in the liver at position 25 (upper right of the molecule) to form the prohormone calcifediol, also referred to as 25(OH)D. [ 3 ] This reaction is catalyzed by the microsomal enzyme vitamin D 25-hydroxylase , the product of the CYP2R1 human gene. [ 202 ] Once made, the product is released into the blood where it is bound to vitamin D-binding protein . [ 203 ]
Calcifediol is transported to the proximal tubules of the kidneys, where it is hydroxylated at the 1-α position (lower right of the molecule) to form calcitriol (1,25-dihydroxycholecalciferol, also referred to as 1,25(OH) 2 D). [ 1 ] The conversion of calcifediol to calcitriol is catalyzed by the enzyme 25-hydroxyvitamin D 3 1-alpha-hydroxylase , which is the product of the CYP27B1 human gene. The activity of CYP27B1 is increased by parathyroid hormone and also by low plasma calcium or phosphate. [ 1 ] Following the final converting step in the kidney, calcitriol is released into the circulation. By binding to vitamin D-binding protein, calcitriol is transported throughout the body. [ 13 ] In addition to the kidneys, calcitriol is also synthesized by certain other cells, including monocyte - macrophages in the immune system . When synthesized by monocyte-macrophages, calcitriol acts locally as a cytokine , modulating body defenses against microbial invaders by stimulating the innate immune system . [ 204 ]
The bioactivity of calcitriol is terminated by hydroxylation at position 24 by vitamin D3 24-hydroxylase , coded for by gene CYP24A1 , forming calcitetrol. [ 3 ] Further metabolism yields calcitroic acid, an inactive water-soluble compound that is excreted in bile. [ 64 ]
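As a compact restatement of the activation and inactivation sequence described above, a minimal Python sketch follows; it simply lists the steps in order, with names taken from the text, and models no kinetics.

```python
# Illustrative sketch only: the metabolic sequence described above, as an
# ordered list of (metabolite, converting step) pairs.

PATHWAY = [
    ("vitamin D (skin synthesis or diet)",
     "25-hydroxylation in the liver (CYP2R1) -> calcifediol, 25(OH)D"),
    ("calcifediol",
     "1-alpha-hydroxylation in the kidney (CYP27B1) -> calcitriol, 1,25(OH)2D"),
    ("calcitriol",
     "24-hydroxylation (CYP24A1) -> calcitetrol, inactive"),
    ("calcitetrol",
     "further metabolism -> calcitroic acid, excreted in bile"),
]

for metabolite, step in PATHWAY:
    print(f"{metabolite}: {step}")
```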
Vitamin D 2 (ergocalciferol) and vitamin D 3 (cholecalciferol) share a similar but not identical mechanism of action. [ 3 ] Metabolites produced by vitamin D 2 are named with an er- or ergo- prefix to differentiate them from the D 3 -based counterparts (sometimes with a chole- prefix). [ 12 ]
Calcitriol exerts its effects primarily by binding to the vitamin D receptor (VDR), which leads to the upregulation of gene transcription . [ 206 ] In the absence of calcitriol, the VDR is mainly located in the cytoplasm of cells. Calcitriol enters cells and binds to the VDR, which forms a complex with its coreceptor RXR; the activated VDR/RXR complex is translocated into the nucleus . [ 204 ] The VDR/RXR complex subsequently binds to vitamin D response elements (VDRE), which are specific DNA sequences adjacent to genes, estimated to number in the thousands. The VDR/RXR/DNA complex recruits other proteins that transcribe the downstream gene into mRNA, which in turn is translated into protein, causing a change in cell function. [ 3 ] [ 50 ]
In addition to calcitriol, other vitamin D metabolites may contribute to vitamin D's biological effects. For example, CYP11A1 , an enzyme chiefly known for its role in steroidogenesis , has been found to hydroxylate vitamin D3 at several positions, including C-20, C-22, and C-23, without cleaving the side chain. The resulting metabolites, such as 20-hydroxyvitamin D3 and 20,23-dihydroxyvitamin D3, act as inverse agonists for RORα and RORγ2 . This interaction leads to effects such as the downregulation of IL-17 signaling, which influences the immune system. [ 207 ] Finally, some effects of vitamin D occur too rapidly to be explained by its influence on gene transcription. For example, calcitriol triggers rapid calcium uptake (within 1-10 minutes) in a variety of cells. These non-genomic actions may involve membrane-bound receptors like PDIA3 . [ 208 ] [ 209 ] [ 210 ]
Genes regulated by the vitamin D receptor influence a wide range of physiological processes beyond calcium homeostasis and bone metabolism. They play a significant role in immune function , cellular signaling , and even blood coagulation , demonstrating the broad impact of vitamin D-regulated genes on human physiology. [ 211 ] Examples of these genes are outlined below.
Vitamin D receptor-regulated genes involved in vitamin D metabolism are CYP27B1 , which encodes the enzyme that produces active vitamin D, [ 212 ] [ 213 ] and CYP24A1 , which encodes the enzyme responsible for degrading active vitamin D. [ 212 ] [ 213 ] In the area of calcium homeostasis and bone metabolism , several genes are regulated by vitamin D. These include TNFSF11 (RANKL), crucial for bone metabolism; [ 212 ] [ 214 ] SPP1 (Osteopontin), which is important for bone metabolism; [ 212 ] [ 214 ] and BGLAP (Osteocalcin), which is involved in bone mineralization. [ 212 ] [ 214 ] Additional genes include TRPV6 , a calcium channel critical for intestinal calcium absorption; [ 211 ] S100G (Calbindin-D9k), a calcium-binding protein that facilitates calcium translocation in enterocytes ; [ 211 ] ATP2B1 (PMCA1b), a plasma membrane calcium ATPase involved in calcium extrusion from the cell; [ 211 ] and the S100A family of genes, which encode calcium-binding proteins involved in various cellular processes. [ 212 ]
Vitamin D also plays a role in immune function , influencing genes such as CAMP (Cathelicidin Antimicrobial Peptide), which is involved in innate immune responses; [ 212 ] [ 211 ] CD14 , which participates in innate immune responses; [ 212 ] and HLA class II genes, which are important for adaptive immune function. [ 212 ] [ 211 ] Cytokines such as IL2 and IL12 , crucial for T cell responses, are also regulated by vitamin D. [ 215 ] In the domain of blood coagulation, vitamin D regulates the expression of THBD (Thrombomodulin), a key gene involved in the coagulation process. [ 212 ] Vitamin D also affects genes involved in cell differentiation and proliferation, including p21 and p27 , which regulate the cell cycle , [ 216 ] as well as transcription factors such as c-fos and c-myc , which are involved in cell proliferation . [ 216 ]
Calcitriol plays a key role in regulating vitamin D levels through a negative feedback mechanism. [ 205 ] It strongly upregulates the expression of the enzyme CYP24A1 , which inactivates vitamin D. This activation happens through binding of the activated vitamin D receptor (VDR) to two vitamin D response elements (VDREs) in the CYP24A1 gene. VDR also recruits proteins like histone acetyltransferases and RNA polymerase II to enhance this process. At the same time, calcitriol suppresses the production of CYP27B1 , another enzyme involved in vitamin D metabolism, by modifying its gene's promoter region through an epigenetic mechanism. Together, these actions help tightly control vitamin D levels in the kidney. [ 205 ]
Vitamin D metabolism is regulated not only by the negative feedback mechanism of calcitriol but also by two hormones: parathyroid hormone (PTH) and fibroblast growth factor-23 (FGF-23). These hormones are essential for maintaining the body's calcium and phosphate balance. [ 205 ]
Parathyroid hormone (PTH) regulates serum calcium through its effects on bone, kidneys, and the small intestine. Bone remodeling , a constant process throughout life, involves bone mineral content being released by osteoclasts ( bone resorption ) and deposited by osteoblasts . PTH enhances the release of calcium from the large reservoir contained in the bones. It accomplishes this by binding to osteoblasts, in this way inhibiting the cells responsible for adding mineral content to bones, thus favoring the actions of osteoclasts. [ 218 ] In the kidneys, around 250 mmol of calcium ions are filtered into the glomerular filtrate per day, with the great majority reabsorbed and the remainder excreted in the urine. [ 219 ] PTH inhibits reabsorption of phosphate (HPO 4 2− ) by the kidneys, resulting in a decrease in plasma phosphate concentration. Given that phosphate ions form water-insoluble salts with calcium, a decrease in the phosphate concentration in plasma (for a given total calcium concentration) increases the amount of ionized (free) calcium. [ 218 ] A third important effect of PTH on the kidneys is stimulation of the conversion of 25-hydroxy vitamin D into 1,25-dihydroxy vitamin D (calcitriol). [ 218 ] This form of vitamin D is the active hormone which promotes calcium uptake from the intestine via the action of calbindin . [ 220 ] Calcitriol also reduces calcium loss to urine. [ 217 ]
As described above, calcitriol suppresses the parathyroid hormone gene, creating a negative feedback loop that tightly maintains plasma calcium in a normal range of 2.1–2.6 mmol/L for total calcium and 1.1–1.3 mmol/L for ionized calcium . [ 211 ] However, there are also vitamin D receptors in bone cells, so that with serum vitamin D in great excess, osteoclastic bone resorption is promoted regardless of PTH, resulting in hypercalcemia and its symptomatology. [ 221 ]
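To make the feedback structure concrete, here is a deliberately simplified Python sketch; the set point, gains, and update rule are illustrative assumptions of this example (labelled as such in the comments), not physiological values from the cited sources.

```python
# Toy sketch, not a physiological model: low plasma calcium raises PTH,
# PTH raises calcitriol, and calcitriol raises calcium back toward a set
# point (with calcitriol also suppressing PTH, closing the loop).
# All constants below are illustrative assumptions.

SET_POINT = 2.35    # mmol/L, mid-range of the 2.1-2.6 total-calcium band
K_PTH = 1.0         # assumed gain: calcium shortfall -> PTH secretion
K_CALCITRIOL = 0.5  # assumed gain: PTH -> renal calcitriol production
K_UPTAKE = 0.2      # assumed gain: calcitriol -> calcium recovery

def step(calcium: float) -> float:
    """One iteration of the loop; returns the updated plasma calcium."""
    pth = max(0.0, K_PTH * (SET_POINT - calcium))  # secreted when calcium is low
    calcitriol = K_CALCITRIOL * pth                # CYP27B1 activity rises with PTH
    return calcium + K_UPTAKE * calcitriol         # more uptake, less urinary loss

calcium = 2.0  # start below the normal range
for _ in range(20):
    calcium = step(calcium)
print(round(calcium, 2))  # 2.31 - approaching the set point from below
```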
In northern European countries, cod liver oil had a long history of folklore medical uses, including being applied to the skin and taken orally as a treatment for rheumatism and gout . [ 222 ] There were several extraction processes. Fresh livers cut to pieces and suspended on screens over pans of boiling water would drip oil that could be skimmed off the water, yielding a pale oil with a mild fish odor and flavor. For industrial purposes such as a lubricant, cod livers were placed in barrels to rot, with the oil skimmed off over months. The resulting oil was light to dark brown, and exceedingly foul smelling and tasting. In the 1800s, cod liver oil became popular as a bottled medicinal product for oral consumption - a teaspoon a day - with both pale and brown oils being used. The trigger for the surge in oral use was the observation, made in several European countries starting with Holland in the 1820s and spreading to other countries into the 1860s, that young children fed cod liver oil did not develop rickets . [ 222 ] In northern Europe and the United States, the practice of giving children cod liver oil to prevent rickets persisted well into the 1950s. This overlapped with the fortification of cow's milk with vitamin D, which began in the early 1930s. [ 222 ]
Knowledge of cod liver oil being rickets-preventive in humans carried over to treating animals. In 1899, London surgeon John Bland-Sutton was asked to investigate why litters of lion cubs at the London Zoo were dying with a presentation that included rickets. He recommended that the diets of the pregnant and nursing females and the weaned cubs be switched from lean horse meat to goat - including calcium- and phosphorus-containing bones - and cod liver oil, solving the problem. Subsequently, researchers realized that animal models such as dogs and rats could be used for rickets research, [ 223 ] leading to the identification and naming of the responsible vitamin in 1922. [ 224 ]
In 1914, American researchers Elmer McCollum and Marguerite Davis had discovered a substance in cod liver oil which later was named " vitamin A ". [ 6 ] Edward Mellanby , a British researcher, observed that dogs that were fed cod liver oil did not develop rickets, and (wrongly) concluded that vitamin A could prevent the disease. In 1922, McCollum tested modified cod liver oil in which the vitamin A had been destroyed. The modified oil cured the sick dogs, so McCollum concluded the factor in cod liver oil which cured rickets was distinct from vitamin A. He called it vitamin D because it was the fourth vitamin to be named. [ 6 ] [ 225 ] [ 226 ]
In 1925, it was established that when 7-dehydrocholesterol is irradiated with light, a form of a fat-soluble substance is produced, now known as vitamin D 3 . [ 6 ] [ 7 ] Adolf Windaus , at the University of Göttingen in Germany, received the Nobel Prize in Chemistry in 1928 "...for the services rendered through his research into the constitution of the sterols and their connection with the vitamins." [ 8 ] Alfred Fabian Hess , his research associate, stated: "Light equals vitamin D." [ 227 ] In 1932, Otto Rosenheim and Harold King published a paper putting forward structures for sterols and bile acids, [ 228 ] and soon thereafter collaborated with Kenneth Callow and others on the isolation and characterization of vitamin D. [ 229 ] Windaus further clarified the chemical structure of vitamin D. [ 230 ]
In 1969, a specific binding protein for vitamin D called the vitamin D receptor was identified. [ 231 ] Shortly thereafter, the conversion of vitamin D to calcifediol and then to calcitriol, the biologically active form, was confirmed. [ 232 ] The photosynthesis of vitamin D 3 in skin via previtamin D 3 and its subsequent metabolism was described in 1980. [ 233 ]
|
https://en.wikipedia.org/wiki/Vitamin_D
|
The insect vitelline envelope is the outer proteinaceous layer outside the oocyte and egg . Although not a cellular structure, the vitelline envelope is commonly referred to as a membrane . This is a technical misnomer, as the structure is composed of protein and is not a cellular component. It varies in thickness between different insects and even varies at different parts of the egg. It lies inside the outer shell of the egg, which is commonly referred to as the chorion . [ 1 ]
The presence of the vitelline membrane defines the embryo's boundaries. It is a critical structural element required to resist the forces of morphogenesis and the mechanical pressures experienced during egg-laying. [ 2 ]
Before egg activation, the vitelline membrane is permeable to water, ions, and small molecules. Egg activation is stimulated by mechanical deformation associated with traversing the narrow channel in the oviduct and requires the presence of Ca 2+ . [ 3 ] During egg activation, the vitelline membrane proteins are crosslinked via disulfide remodeling; the structure rigidifies and becomes impermeable to water but remains gas permeable. [ 4 ] This process is hypothesized to have been selected to prevent polyspermy. [ 5 ] The vitelline membrane is composed primarily of four glycoproteins, collectively referred to as vitelline membrane proteins (VMPs). This class of proteins contains a conserved "VM domain": (CX7CX8C). VMPs are secreted during stages 9–10 of oogenesis and accumulate as vitelline bodies in the extracellular space; these bodies fuse to form a continuous layer at the end of stage 10. This layer thins as the oocyte grows to reach a final thickness of ~0.4 μm.
Upon egg activation, peroxidase-mediated crosslinking occurs in the vitelline membrane, resulting in a disulfide-linked network. [ 6 ] After crosslinking, the envelope is impermeable to additional sperm, water, and other large molecules but remains permeable to gas exchange. Spatial information and developmental patterning are encoded on the surface of the vitelline membrane. For example, in D. melanogaster, the dorsal-ventral body axis is determined by ventrally sulfated eggshell proteins that recruit and activate the Spätzle ligand within the perivitelline space, which, in turn, activates the Toll receptor upstream of morphogens such as Dorsal and Twist. [ 7 ]
|
https://en.wikipedia.org/wiki/Vitelline_envelope
|
This page provides supplementary chemical data on vitexin .
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as eChemPortal , and follow its directions.
1 H NMR: δ 3.53 (1H, m, H-5″), 3.54 (1H, m, H-3″), 3.57 (1H, m, H-4″), 3.74 (1H, dd, J = 12.3, 5.5 Hz, H-6a″), 3.77 (1H, dd, J = 12.3, 2.0 Hz, H-6b″), 4.11 (1H, t, J = 9.0 Hz, H-2″), 4.85 (1H, d, J = 9.9 Hz, H-1″), 6.44 (1H, s, H-8), 6.60 (1H, s, H-3), 6.95 (2H, d, J = 8.6 Hz, H-3′, 5′), 7.85 (2H, d, J = 8.6 Hz, H-2′, 6′)
13 C NMR: δ 61.7 (C-6″), 70.6 (C-4″), 72.0 (C-2″), 74.5 (C-1″), 79.2 (C-3″), 81.6 (C-5″), 95.3 (C-8), 103.4 (C-3), 104.3 (C-10), 108.7 (C-6), 116.7 (C-3′, 5′), 122.4 (C-1′), 129.0 (C-2′, 6′), 157.7 (C-9), 161.0 (C-5), 162.0 (C-4′), 164.5 (C-7), 165.0 (C-2), 183.1 (C-4)
|
https://en.wikipedia.org/wiki/Vitexin_(data_page)
|
Vitreous enamel , also called porcelain enamel , is a material made by fusing powdered glass to a substrate by firing, usually between 750 and 850 °C (1,380 and 1,560 °F). The powder melts, flows, and then hardens to a smooth, durable vitreous coating. The word vitreous comes from the Latin vitreus , meaning "glassy".
Enamel can be used on metal , glass , ceramics , stone, or any material that will withstand the fusing temperature. In technical terms fired enamelware is an integrated layered composite of glass and another material (or more glass). The term "enamel" is most often restricted to work on metal, which is the subject of this article. Essentially the same technique used with other bases is known by different terms: on glass as enamelled glass , or "painted glass", and on pottery it is called overglaze decoration , "overglaze enamels" or "enamelling". The craft is called " enamelling ", the artists "enamellers" and the objects produced can be called "enamels".
Enamelling is an old and widely adopted technology, for most of its history mainly used in jewellery and decorative art . Since the 18th century, enamels have also been applied to many metal consumer objects, such as some cooking vessels , steel sinks, and cast-iron bathtubs. It has also been used on some appliances , such as dishwashers , laundry machines , and refrigerators , and on marker boards and signage .
The term "enamel" has also sometimes been applied to industrial materials other than vitreous enamel, such as enamel paint and the polymers coating enameled wire ; these actually are very different in materials science terms.
The word enamel comes from the Old High German word smelzan (to smelt ) via the Old French esmail , [ 1 ] or from a Latin word smaltum , first found in a 9th-century Life of Leo IV . [ 2 ] Used as a noun, "an enamel" is usually a small decorative object coated with enamel. "Enamelled" and "enamelling" are the preferred spellings in British English , while "enameled" and "enameling" are preferred in American English .
The earliest enamels all used the cloisonné technique, placing the enamel within small cells with gold walls. This had been used as a technique to hold pieces of stone and gems tightly in place since the 3rd millennium BC, for example in Mesopotamia , and then Egypt. Enamel seems likely to have developed as a cheaper method of achieving similar results. [ 3 ]
The earliest undisputed objects known to use enamel are a group of Mycenaean rings from Cyprus , dated to the 13th century BC. [ 3 ] Although Egyptian pieces, including jewellery from the Tomb of Tutankhamun of c. 1325 BC, are frequently described as using "enamel", many scholars doubt the glass paste was sufficiently melted to be properly so described, and use terms such as "glass-paste". It seems possible that in Egyptian conditions the melting point of the glass and gold were too close to make enamel a viable technique. Nonetheless, there appear to be a few actual examples of enamel, perhaps from the Third Intermediate Period of Egypt (beginning 1070 BC) on. [ 4 ] But it remained rare in both Egypt and Greece.
The technique appears in the Koban culture of the northern and central Caucasus , and was perhaps carried by the Sarmatians to the ancient Celts. [ 3 ] Red enamel is used in 26 places on the Battersea Shield (c.350–50 BC), probably as an imitation of the red Mediterranean coral , which is used on the Witham Shield (400–300 BC). Pliny the Elder mentions the Celts' use of the technique on metal, which the Romans in his day hardly knew. The Staffordshire Moorlands Pan is a 2nd-century AD souvenir of Hadrian's Wall , made for the Roman military market, which has swirling enamel decoration in a Celtic style. In Britain, probably through preserved Celtic craft skills, enamel survived until the hanging bowls of early Anglo-Saxon art .
A problem that adds to the uncertainty over early enamel is artefacts (typically excavated) that appear to have been prepared for enamel, but have now lost whatever filled the cloisons or backing to a champlevé piece. [ 3 ] This occurs in several different regions, from ancient Egypt to Anglo-Saxon England. Once enamel becomes more common, as in medieval Europe after about 1000, the assumption that enamel was originally used becomes safer.
In European art history, enamel was at its most important in the Middle Ages , beginning with the Late Romans and then the Byzantine , who began to use cloisonné enamel in imitation of cloisonné inlays of precious stones. The Byzantine enamel style was widely adopted by the peoples of Migration Period northern Europe. The Byzantines then began to use cloisonné more freely to create images; this was also copied in Western Europe. In Kievan Rus a finift enamel technique was developed. [ 5 ] [ 6 ]
Mosan metalwork often included enamel plaques of the highest quality in reliquaries and other large works of goldsmithing . Limoges enamel was made in Limoges , France, the most famous centre of vitreous enamel production in Western Europe, though Spain also made a good deal. Limoges became famous for champlevé enamels from the 12th century onwards, producing on a large scale, and then (after a period of reduced production) from the 15th century retained its lead by switching to painted enamel on flat metal plaques. The champlevé technique was considerably easier and very widely practiced in the Romanesque period. In Gothic art the finest work is in basse-taille and ronde-bosse techniques, but cheaper champlevé works continued to be produced in large numbers for a wider market.
Painted enamel remained in fashion for over a century, and in France developed into sophisticated Renaissance and Mannerist styles, seen on objects such as large display dishes, ewers, inkwells and in small portraits. After it fell from fashion it continued as a medium for portrait miniatures , spreading to England and other countries. This continued until the early 19th century.
A Russian school developed, which used the technique on other objects, as in the Renaissance, and for relatively cheap religious pieces such as crosses and small icons.
From either Byzantium or the Islamic world, the cloisonné technique reached China in the 13–14th centuries. The first written reference to cloisonné is in a book from 1388, where it is called "Dashi ('Muslim') ware". [ 7 ] No Chinese pieces that are clearly from the 14th century are known; the earliest datable pieces are from the reign of the Xuande Emperor (1425–1435), which, since they show a full use of Chinese styles, suggest considerable experience in the technique.
Cloisonné remained very popular in China until the 19th century and is still produced today. The most elaborate and most highly valued Chinese pieces are from the early Ming dynasty , especially the reigns of the Xuande Emperor and Jingtai Emperor (1450–1457), although 19th century or modern pieces are far more common. [ 7 ]
Japanese artists did not make three-dimensional enamelled objects until the 1830s but, once the technique took hold based on analysis of Chinese objects, it developed very rapidly, reaching a peak in the Meiji and Taishō eras (late 19th/early 20th century). [ 8 ] Enamel had been used as decoration for metalwork since about 1600, [ 9 ] [ 8 ] and Japanese cloisonné was already exported to Europe before the start of the Meiji era in 1868. [ 8 ] Cloisonné is known in Japan as shippo , literally "seven treasures". [ 10 ] This refers to richly coloured substances mentioned in Buddhist texts. [ 11 ] The term was initially used for colourful objects imported from China. According to legend, in the 1830s Kaji Tsunekichi broke open a Chinese enamel object to examine it, then trained many artists, starting off Japan's own enamel industry. [ 11 ] [ 9 ]
Early Japanese enamels were cloudy and opaque, with relatively clumsy shapes. This changed rapidly from 1870 onwards. [ 8 ] The Nagoya cloisonné company ( Nagoya shippo kaisha ) existed from 1871 to 1884, to sell the output of many small workshops and help them improve their work. [ 8 ] In 1874, the government created the Kiriu kosho kaisha company to sponsor the creation of a wide range of decorative arts at international exhibitions. This was part of a programme to promote Japan as a modern, industrial nation. [ 8 ]
Gottfried Wagener was a German scientist brought in by the government to advise Japanese industry and improve production processes. Along with Namikawa Yasuyuki he developed a transparent black enamel which was used for backgrounds. Translucent enamels in various other colours followed during this period. [ 8 ] Along with Tsukamoto Kaisuke , Wagener transformed the firing processes used by Japanese workshops, improving the quality of finishes and extending the variety of colours. [ 8 ] Kawade Shibatarō introduced a variety of techniques, including nagare-gusuri (drip-glaze) which produces a rainbow-coloured glaze and uchidashi ( repoussé ) technique, in which the metal foundation is hammered outwards to create a relief effect. [ 12 ] Together with Hattori Tadasaburō he developed the moriage ("piling up") technique which places layers of enamel upon each other to create a three-dimensional effect. [ 13 ] Namikawa Sōsuke developed a pictorial style that imitated paintings. He is known for shosen (minimised wires) and musen (wireless cloisonné): techniques developed with Wagener in which the wire cloisons are minimised or burned away completely with acid. [ 14 ] [ 9 ] This contrasts with the Chinese style which used thick metal cloisons . [ 8 ] Ando Jubei introduced the shōtai-jippō ( plique-à-jour ) technique which burns away the metal substrate to leave translucent enamel, producing an effect resembling stained glass . [ 15 ] The Ando Cloisonné Company which he co-founded is one of the few makers from this era still active. [ 8 ] Distinctively Japanese designs, in which flowers, birds and insects were used as themes, became popular. Designs also increasingly used areas of blank space. [ 9 ] With the greater subtlety these techniques allowed, Japanese enamels were regarded as unequalled in the world [ 16 ] and won many awards at national and international exhibitions. [ 14 ] [ 17 ]
Enamel was established in the Mughal Empire by around 1600 for decorating gold and silver objects, and became a distinctive feature of Mughal jewellery. The Mughal court was known to employ mīnākār (enamelers). [ 18 ] These craftsmen reached their peak during the reign of Shah Jahan in the mid-17th century. Transparent enamels were popular during this time. [ 18 ] Both cloisonné and champlevé were produced in Mughal India, with champlevé used for the finest pieces. [ 18 ] Modern industrial production began in Calcutta in 1921, with the Bengal Enamel Works Limited.
Enamel was used in Iran for colouring and ornamenting the surface of metals by fusing over it brilliant colours decorated in an intricate design, called Meenakari . The French traveller Jean Chardin , who toured Iran during the Safavid period, made a reference to an enamel work of Isfahan , which comprised a pattern of birds and animals on a floral background in light blue, green, yellow and red. Gold has been used traditionally for Meenakari jewellery as it holds the enamel better, lasts longer and its lustre brings out the colours of the enamels. Silver , a later introduction, is used for artifacts like boxes, bowls, spoons, and art pieces. Copper began to be used for handicraft products after the Gold Control Act was enforced in India, which compelled the Meenakars to look for an alternative material. Initially, the work of Meenakari often went unnoticed as this art was traditionally used on the back of pieces of kundan or gem-studded jewellery, allowing pieces to be reversible. [ 19 ]
More recently, the bright, jewel-like colours have made enamel popular with jewellery designers, including the Art Nouveau jewellers, for designers of bibelots such as the eggs of Peter Carl Fabergé and the enameled copper boxes of the Battersea enamellers, [ 20 ] and for artists such as George Stubbs and other painters of portrait miniatures .
Enamel was first applied commercially to sheet iron and steel in Austria and Germany in about 1850. [ 21 ] : 5 Industrialization increased as the purity of raw materials increased and costs decreased. The wet application process started with the discovery of the use of clay to suspend frit in water. Developments that followed during the 20th century include enamelling-grade steel, cleaned-only surface preparation, automation, and ongoing improvements in efficiency, performance, and quality. [ 21 ] : 5
Between the World Wars, Cleveland in the United States became a center for enamel art, led by Kenneth F. Bates ; H. Edward Winter , who had taught at the Cleveland School of Art, wrote three books on the topic, including Enamel Art on Metals . [ 22 ] In Australia , abstract artist Bernard Hesling brought the style into prominence with his variously sized steel plates, starting in 1957. [ 23 ] A resurgence in enamel-based art took place near the end of the 20th century in the Soviet Union , led by artists like Alexei Maximov and Leonid Efros . [ 24 ]
Vitreous enamel can be applied to most metals. Most modern industrial enamel is applied to steel in which the carbon content is controlled to prevent unwanted reactions at the firing temperatures. Enamel can also be applied to gold, silver, copper, aluminium , [ 25 ] stainless steel, [ 26 ] and cast iron . [ 27 ]
Vitreous enamel has many useful properties: it is smooth, hard, chemically resistant, durable, scratch resistant (5–6 on the Mohs scale ), has long-lasting colour fastness, is easy to clean, and cannot burn. Enamel is glass, not paint, so it does not fade under ultraviolet light . [ 28 ] A disadvantage of enamel is a tendency to crack or shatter when the substrate is stressed or bent, but modern enamels are relatively chip- and impact-resistant because of good thickness control and coefficients of thermal expansion well-matched to the metal. [ citation needed ]
The Buick automobile company was founded by David Dunbar Buick with wealth earned by his development of improved enamelling processes, c. 1887, for sheet steel and cast iron. Such enameled ferrous material had, and still has, many applications: early 20th century and some modern advertising signs, interior oven walls, cooking pots , housing and interior walls of major kitchen appliances , housing and drums of clothes washers and dryers, sinks and cast iron bathtubs , farm storage silos , and processing equipment such as chemical reactors and pharmaceutical process tanks. Structures such as filling stations , bus stations and Lustron Houses had walls, ceilings and structural elements made of enamelled steel. [ citation needed ]
One of the most widespread modern uses of enamel is in the production of quality chalk-boards and marker-boards (typically called 'blackboards' or 'whiteboards') where the resistance of enamel to wear and chemicals ensures that 'ghosting', or unerasable marks, do not occur, as happens with polymer boards. Since standard enamelling steel is magnetically attractive, it may also be used for magnet boards. Some new developments in the last ten years include enamel/non-stick hybrid coatings, sol-gel functional top-coats for enamels, enamels with a metallic appearance, and easy-to-clean enamels. [ 29 ]
The key ingredient of vitreous enamel is finely ground glass called frit . Frit for enamelling steel is typically an alkali borosilicate glass with a thermal expansion and glass temperature suitable for coating steel. Raw materials are smelted together between 2,100 and 2,650 °F (1,150 and 1,450 °C) into a liquid glass that is directed out of the furnace and thermal shocked with either water or steel rollers into frit. [ 21 ]
Colour in enamel is obtained by the addition of various minerals, often oxides of metals such as cobalt , praseodymium , iron , or neodymium . The latter creates delicate shades ranging from pure violet through wine-red and warm grey. Enamel can be transparent, opaque or opalescent (translucent). Different enamel colours can be mixed to make a new colour, in the manner of paint.
There are various types of frit, which may be applied in sequence. A ground coat is applied first; it usually contains smelted-in transition metal oxides such as cobalt, nickel, copper, manganese, and iron that facilitate adhesion to the metal. Next, clear and semi-opaque frits that contain material for producing colours are applied.
The three main historical techniques for enamelling metal are:
Variants and less common techniques are:
Other types:
See also Japanese shipōyaki techniques .
On sheet steel, a ground coat layer is applied to create adhesion. The only surface preparation required for modern ground coats is degreasing of the steel with a mildly alkaline solution. White and coloured second "cover" coats of enamel are applied over the fired ground coat. For electrostatic enamels, the coloured enamel powder can be applied directly over a thin unfired ground coat "base coat" layer that is co-fired with the cover coat in a very efficient two-coat/one-fire process.
The frit in the ground coat contains smelted-in cobalt and/or nickel oxide as well as other transition metal oxides to catalyse the enamel-steel bonding reactions. During firing of the enamel at between 760 and 895 °C (1,400 and 1,643 °F), iron oxide scale first forms on the steel. The molten enamel dissolves the iron oxide and precipitates cobalt and nickel . The iron acts as the anode in an electrogalvanic reaction in which the iron is again oxidised, dissolved by the glass, and oxidised again with the available cobalt and nickel limiting the reaction. Finally, the surface becomes roughened with the glass anchored into the holes. [ 40 ]
Enamel coatings applied to steel panels offer protection to the core material, whether cladding road tunnels, underground stations, building superstructures or other applications. It can also be specified as curtain walling. Qualities of this structural material include: [ 41 ]
|
https://en.wikipedia.org/wiki/Vitreous_enamel
|
Vitrification (from Latin vitrum ' glass ' , via French vitrifier ) is the full or partial transformation of a substance into a glass , [ 1 ] that is to say, a non- crystalline or amorphous solid . Glasses differ from liquids structurally and glasses possess a higher degree of connectivity with the same Hausdorff dimensionality of bonds as crystals: dim H = 3. [ 2 ] In the production of ceramics , vitrification is responsible for their impermeability to water . [ 3 ]
Vitrification is usually achieved by heating materials until they liquefy, then cooling the liquid, often rapidly, so that it passes through the glass transition to form a glassy solid. Certain chemical reactions also result in glasses.
In terms of chemistry , vitrification is characteristic of amorphous materials or disordered systems and occurs when bonding between elementary particles ( atoms , molecules , building blocks) becomes higher than a certain threshold value. [ 4 ] Thermal fluctuations break the bonds; therefore, the lower the temperature , the higher the degree of connectivity. Because of that, amorphous materials have a characteristic threshold temperature termed the glass transition temperature ( T g ): below T g amorphous materials are glassy whereas above T g they are molten.
The most common applications are in the making of pottery , glass, and some types of food, but there are many others, such as the vitrification of an antifreeze-like liquid in cryopreservation .
In a different sense of the word, the embedding of material inside a glassy matrix is also called vitrification . An important application is the vitrification of radioactive waste to obtain a substance that is thought to be safer and more stable for disposal.
One study suggests [ 5 ] [ 6 ] [ 7 ] [ 8 ] that, during the eruption of Mount Vesuvius in 79 AD , a victim's brain was vitrified by the extreme heat of the volcanic ash ; however, this has been strenuously disputed. [ 9 ]
Vitrification is the progressive partial fusion of a clay , or of a body, as a result of a firing process . As vitrification proceeds, the proportion of glassy bond increases and the apparent porosity of the fired product becomes progressively lower. [ 3 ] [ 10 ] Vitreous bodies have open porosity, and may be either opaque or translucent . In this context, "zero porosity" may be defined as less than 1% water absorption. However, various standard procedures define the conditions of water absorption. [ 11 ] [ 12 ] [ 13 ] An example is given by ASTM , which states "The term vitreous generally signifies less than 0.5% absorption, except for floor and wall tile and low-voltage electrical insulators , which are considered vitreous up to 3% water absorption." [ 14 ]
Pottery can be made impermeable to water by glazing or by vitrification. Porcelain , bone china , and sanitaryware are examples of vitrified pottery, and are impermeable even without glaze. Stoneware may be vitrified or semi-vitrified; the latter type would not be impermeable without glaze. [ 15 ] [ 3 ] [ 16 ]
When sucrose is cooled slowly it results in crystal sugar (or rock candy ), but when cooled rapidly it can form syrupy cotton candy (candyfloss).
Vitrification can also occur in a liquid such as water, usually through very rapid cooling or the introduction of agents that suppress the formation of ice crystals. This is in contrast to ordinary freezing which results in ice crystal formation. Vitrification is used in cryo-electron microscopy to cool samples so quickly that they can be imaged with an electron microscope without damage. [ 17 ] [ 18 ] In 2017, the Nobel prize for chemistry was awarded for the development of this technology, which can be used to image objects such as proteins or virus particles. [ 19 ]
Ordinary soda-lime glass , used in windows and drinking containers, is created by the addition of sodium carbonate and lime ( calcium oxide ) to silicon dioxide . Without these additives, silicon dioxide would require very high temperature to obtain a melt, and subsequently (with slow cooling) a glass.
Vitrification is used in disposal and long-term storage of nuclear waste or other hazardous wastes [ 20 ] in a method called geomelting . Waste is mixed with glass-forming chemicals in a furnace to form molten glass that then solidifies in canisters, thereby immobilizing the waste. The final waste form resembles obsidian and is a non- leaching , durable material that effectively traps the waste inside. It is widely assumed that such waste can be stored for relatively long periods in this form without concern for air or groundwater contamination . Bulk vitrification uses electrodes to melt soil and wastes where they lie buried. The hardened waste may then be disinterred with less danger of widespread contamination. According to the Pacific Northwest National Labs , "Vitrification locks dangerous materials into a stable glass form that will last for thousands of years." [ 21 ]
Vitrification in cryopreservation is used to preserve, for example, human egg cells ( oocytes ) (in oocyte cryopreservation ) and embryos (in embryo cryopreservation ). It prevents ice crystal formation and requires extremely rapid cooling, on the order of 23,000 °C per minute.
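As a rough worked example (assuming, purely for illustration, that the quoted rate applies uniformly over the whole range), cooling a sample from body temperature (37 °C) to liquid-nitrogen temperature (−196 °C) spans 233 °C, which at 23,000 °C/min takes about 0.01 min, or roughly 0.6 seconds.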
Currently, vitrification techniques have only been applied to brains ( neurovitrification ) by Alcor and to the upper body by the Cryonics Institute , but research is in progress by both organizations to apply vitrification to the whole body.
Many woody plants living in polar regions naturally vitrify their cells to survive the cold. Some can survive immersion in liquid nitrogen and liquid helium . [ 22 ] Vitrification can also be used to preserve endangered plant species and their seeds. For example, recalcitrant seeds are considered hard to preserve. Plant vitrification solution (PVS), one application of vitrification, has successfully preserved Nymphaea caerulea seeds. [ 23 ]
Additives used in cryobiology or produced naturally by organisms living in polar regions are called cryoprotectants .
|
https://en.wikipedia.org/wiki/Vitrification
|
Vitrimers are a class of plastics , which are derived from thermosetting polymers (thermosets) and are very similar to them. Vitrimers consist of molecular, covalent networks, which can change their topology by thermally activated bond-exchange reactions. At high temperatures, they can flow like viscoelastic liquids; at low temperatures, the bond-exchange reactions are immeasurably slow ( frozen ), and vitrimers behave like classical thermosets at this point. Vitrimers are strong glass formers. Their behavior opens new possibilities in the application of thermosets, such as self-healing materials or simple processability over a wide temperature range. [ 1 ] [ 2 ] [ 3 ]
Besides epoxy resins based on diglycidyl ether of bisphenol A , other polymer networks have been used to produce vitrimers, such as aromatic polyesters, [ 4 ] [ 5 ] polylactic acid (polylactide), [ 2 ] polyhydroxyurethanes , [ 3 ] epoxidized soybean oil with citric acid , [ 6 ] and polybutadiene . [ 7 ] Vitrimers were termed as such in the early 2010s by French researcher Ludwik Leibler from the CNRS . [ 8 ]
Thermoplastics are easy to process but are readily degraded by chemicals and mechanical stress, while the opposite is true of thermosets. These differences arise from how the polymer chains are held together.
Historically, thermoset polymer systems that were processable by virtue of topology changes within their covalent networks, mediated by bond-exchange reactions, were also developed by James Economy's group at UIUC in the 1990s, [ 4 ] [ 5 ] including the consolidation of thermoset composite laminae. [ 9 ] The Economy group also conducted studies employing secondary ion mass spectrometry (SIMS) on deuterated and undeuterated fully cured vitrimer layers to discriminate the length scales (<50 nm) of physical interdiffusion between the layers' constituent atoms, providing evidence against physical interdiffusion of the polymer chains as the governing mechanism for bonding between vitrimer layers. [ 10 ]
Thermoplastics are made of covalently bonded molecular chains, which are held together by weak intermolecular interactions (e.g., van der Waals forces ). The weak intermolecular interactions lead to easy processing by melting (or in some cases also from solution ), but also make the polymer susceptible to solvent degradation and to creep under constant load. Thermoplastics can be deformed reversibly above their glass-transition temperature or their crystalline melting point and be processed by extrusion , injection molding , and welding .
Thermosets, on the other hand, are made of molecular chains which are interconnected by covalent bonds to form a stable network. Thus, they have outstanding mechanical properties and thermal and chemical resistance. They are an indispensable part of structural components in automotive and aircraft industries. Due to their irreversible linking by covalent bonds, molding is not possible once the polymerization is completed. Therefore, they must be polymerized in the desired shape, which is time-consuming, restricts the shape and is responsible for their high price. [ 11 ]
Given this, if the chains can be held together with reversible, strong covalent bonds, the resultant polymer would have the advantages of both thermoplastics and thermosets, including high processability, repairability, and performance. Vitrimers combine the desirable properties of both classes: they have the mechanical and thermal properties of thermosets and can also be molded under the influence of heat. Vitrimers can be welded like silicate glasses or metals . Welding by simple heating allows the creation of complex objects. [ 10 ] [ 12 ] Vitrimers could thus be a new and promising class of materials with many uses. [ 13 ]
The term vitrimer was created by the French researcher Ludwik Leibler , head of laboratory at CNRS , France 's national research institute. [ 14 ] In 2011, Leibler and co-workers developed silica-like networks using the well-established transesterification reaction of epoxy and fatty dicarboxylic or tricarboxylic acids. [ 11 ] The synthesized networks have both hydroxyl and ester groups, which undergo exchange reactions ( transesterifications ) at high temperatures, resulting in the ability of stress relaxation and malleability of the material. On the other hand, the exchange reactions are suppressed to a great extent when the networks are cooled down, leading to a behavior like a soft solid. This whole process is based only on exchange reactions, which is the main difference from that of thermoplastics .
If the melt of an (organic) amorphous polymer is cooled down, it solidifies at the glass-transition temperature T g . On cooling, the hardness of the polymer increases in the neighborhood of T g by several orders of magnitude . This hardening follows the Williams-Landel-Ferry equation , not the Arrhenius equation . Organic polymers are thus called fragile glass formers . Silicate glass (e.g., window glass) is, in contrast, labelled a strong glass former. Its viscosity changes only very slowly in the vicinity of the glass-transition point T g and follows the Arrhenius law. This is what permits glassblowing. If one were to try to shape an organic polymer in the same manner as glass, it would at first be firm and then liquefy fully only very slightly above T g . For a theoretical glassblowing of organic polymers, the temperature would therefore have to be controlled very precisely.
Until 2010, no organic strong glass formers were known. Strong glass formers can be shaped in the same way as glass (silicon dioxide) can be. Vitrimers are the first such materials discovered; they can behave like a viscoelastic fluid at high temperatures. Unlike classical polymer melts, whose flow properties are largely dependent on friction between monomers, vitrimers become a viscoelastic fluid because of exchange reactions at high temperatures as well as monomer friction. [ 11 ] These two processes have different activation energies , resulting in a wide range of viscosity variation. Moreover, because the exchange reactions follow Arrhenius' law , the change in viscosity of vitrimers also follows an Arrhenius relationship with increasing temperature, differing greatly from conventional organic polymers.
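A minimal numeric sketch of this strong-versus-fragile contrast is given below. Every parameter value (the activation energy, T g , the WLF constants C 1 and C 2 , and the reference viscosity) is an assumption chosen purely for illustration, not a measured property of any particular vitrimer or polymer:

```python
# Illustrative contrast between Arrhenius (strong) and WLF (fragile)
# viscosity behaviour above the glass transition. All parameter values
# are assumptions for illustration only.
import math

R = 8.314    # gas constant, J/(mol*K)
Tg = 350.0   # assumed glass-transition temperature, K

def eta_arrhenius(T, eta_g=1e12, Ea=100e3):
    """Arrhenius law, normalised so that eta(Tg) = eta_g."""
    return eta_g * math.exp((Ea / R) * (1.0 / T - 1.0 / Tg))

def eta_wlf(T, eta_g=1e12, C1=17.44, C2=51.6):
    """WLF equation: log10(eta(T)/eta(Tg)) = -C1*(T-Tg)/(C2+(T-Tg))."""
    return eta_g * 10 ** (-C1 * (T - Tg) / (C2 + (T - Tg)))

for T in (360.0, 380.0, 400.0, 450.0):
    print(f"T = {T:5.1f} K  Arrhenius: {eta_arrhenius(T):10.2e} Pa*s"
          f"   WLF: {eta_wlf(T):10.2e} Pa*s")
```

With these illustrative numbers, over the 100 K window above T g the WLF viscosity falls by roughly eleven orders of magnitude while the Arrhenius viscosity falls by only three to four: the strong glass former keeps a workable viscosity over a wide temperature range, which is what permits glassblowing-like processing.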
The research group led by Ludwik Leibler demonstrated the operating principle of vitrimers using the example of epoxy thermosets. Epoxy thermosets can behave as vitrimers when transesterification reactions are introduced and controlled. In the studied system, carboxylic acids or carboxylic acid anhydrides must be used as hardeners. [ 13 ] A topology change is possible through transesterification reactions which do not affect the number of links or the (average) functionality of the polymer, meaning that neither decomposition of polymer linkages nor a loss of network integrity occurs when transesterification reactions take place. Thus, the polymer can flow like a viscoelastic liquid at high temperatures. During the cooling phase, the transesterification reactions slow down until they finally freeze (become immeasurably slow). Below this point vitrimers behave like normal, classical thermosets. The case-study polymers showed an elastic modulus of 1 MPa to 100 MPa, depending on the density of the bonding network.
The concentration of ester groups in vitrimers is shown to have a large influence on the rate of transesterification reactions. In work by Hillmyer et al. on polylactide vitrimers, they demonstrated that the more ester groups present in the polymer, the faster the rates of relaxation, leading to better self-healing performance. [ 2 ] Polylactide vitrimers, which are synthesized by cross-linking reactions of hydroxyl-terminated 4-arm star-shaped poly((±)-lactide) (HTSPLA) and methylenediphenyl diisocyanate (MDI) in the presence of the cross-linking and transesterification catalyst stannous octoate [Sn(Oct) 2 ], have many more ester groups than all previous vitrimers; therefore, this material has a significantly higher stress-relaxation rate compared to other polyester-based vitrimer systems.
There are many uses imaginable on this basis. A surfboard made of vitrimers could be brought into a new shape, scratches on a car body could be cured and cross-linked plastic or synthetic rubber items could be welded. Vitrimers, which are prepared from metathesis of dioxaborolanes with different commercially available polymers, can have both good processibility and outstanding performance, such as mechanical, thermal, and chemical resistance. [ 15 ] [ 16 ] The polymers that can be utilized in such methodology range from poly(methylmethacrylate) , polyimine , polystyrene , to polyethylene with high density and cross-linked robust structures, which makes this preparative method of vitrimers able to be applied to a wide range of industries. Recent NASA-funded work on reversible adhesives for in-space assembly has used a high-performance vitrimer system called aromatic thermosetting copolyester (ATSP) as the basis for coatings and composites reversibly bondable in the solid state – providing new possibilities for the assembly of large, complex structures for space exploration and development. [ 17 ] [ 18 ] Start-up Mallinda Inc. claims to have applications across the composites market from wind energy, sporting goods, automotive, aerospace, marine, and carbon fiber reinforced pressure vessels among others.
|
https://en.wikipedia.org/wiki/Vitrimers
|
The Vitruvian Man ( Italian : L'uomo vitruviano ; [ˈlwɔːmo vitruˈvjaːno] ) is a drawing by the Italian Renaissance artist and scientist Leonardo da Vinci , dated to c. 1490 . Inspired by the writings of the ancient Roman architect Vitruvius , the drawing depicts a nude man in two superimposed positions with his arms and legs apart and inscribed in both a circle and square. It was described by the art historian Carmen C. Bambach as "justly ranked among the all-time iconic images of Western civilization ". [ 1 ] Although not the only known drawing of a man inspired by the writings of Vitruvius, the work is a unique synthesis of artistic and scientific ideals and often considered an archetypal representation of the High Renaissance .
The drawing represents Leonardo's conception of ideal body proportions , originally derived from Vitruvius but influenced by his own measurements, the drawings of his contemporaries, and the De pictura treatise by Leon Battista Alberti . Leonardo produced the Vitruvian Man in Milan and the work was probably passed to his student Francesco Melzi . It later came into the possession of Venanzio de Pagave, who convinced the engraver Carlo Giuseppe Gerli to include it in a book of Leonardo's drawings, which widely disseminated the previously little-known image. It was later owned by Giuseppe Bossi , who wrote early scholarship on it, and eventually sold to the Gallerie dell'Accademia of Venice in 1822, where it has remained since. Due to its sensitivity to light, the drawing rarely goes on public display, but it was borrowed by the Louvre in 2019 for their exhibition marking the 500th anniversary of Leonardo's death.
The drawing is described by Leonardo's notes as Le proporzioni del corpo umano secondo Vitruvio , [ 2 ] variously translated as The Proportions of the Human Figure after Vitruvius , [ 3 ] or Proportional Study of a Man in the Manner of Vitruvius . [ 4 ] It is much better known as the Vitruvian Man . [ 2 ] The art historian Carlo Pedretti lists it as Homo Vitruvius, study of proportions with the human figure inscribed in a circle and a square , and later as simply Homo Vitruvius . [ 5 ]
The drawing was executed primarily with pen and light-brown ink, while there are traces of brown wash (watercolor). [ 6 ] [ n 1 ] The paper measures 34.4 cm × 25.5 cm (13.5 in × 10.0 in), larger than most of Leonardo's folio manuscript sheets, [ n 2 ] while the paper itself was originally made somewhat unevenly, given its irregular edges. [ 1 ] Close examination of the drawing reveals that it was meticulously prepared, and is devoid of "sketchy and tentative" lines. [ 8 ] Leonardo used metalpoint with calipers and a compass to make precise lines, and small tick marks were used for measurements. [ 6 ] [ 8 ] These compass marks demonstrate an inner structure of "measured intervals" which is displayed in tandem with the general structure created by the geometric figures. [ 9 ]
The Vitruvian Man depicts a nude man facing forward and surrounded by a square, while superimposed on a circle. [ 2 ] The man is portrayed in different stances simultaneously: His arms are stretched above his shoulders and then perpendicular to them, while his legs are together and also spread out along the circle's base. [ 2 ] The scholar Carlo Vecce notes that this approach displays multiple phases of movement at once, akin to a photograph. [ 10 ] The man's fingers and toes are arranged carefully as to not breach the surrounding shapes. [ 9 ] Commentators often note that Leonardo went out of his way to create an artistic depiction of the man, rather than a simple portrayal. [ 11 ] [ 12 ] According to the biographer Walter Isaacson , the use of delicate lines, an intimate stare, and intricate hair curls, "weaves together the human and the divine". [ 11 ] Pedretti notes close similarities between the man and the angel of Leonardo's earlier Annunciation painting. [ 12 ]
The text above the image reads:
Vetruvio, architecto, mecte nella sua op(er)a d'architectura, chelle misure dell'omo sono dalla natura disstribuite inquessto modo cioè che 4 diti fa 1 palmo, et 4 palmi fa 1 pie, 6 palmi fa un chubito, 4 cubiti fa 1 homo, he 4 chubiti fa 1 passo, he 24 palmi fa 1 homo ecqueste misure son ne' sua edifiti. Settu ap(r)i ta(n)to le ga(m)be chettu chali da chapo 1/14 di tua altez(z)a e ap(r)i e alza tanto le b(r)acia che cholle lunge dita tu tochi la linia della somita del chapo, sappi che 'l cie(n)tro delle stremita delle ap(er)te me(m)bra fia il bellicho. Ello spatio chessi truova infralle ga(m)be fia tria(n)golo equilatero [ 13 ]
Vitruvius, the architect, says in his architectural work that the measurements of man are in nature distributed in this manner, that is 4 fingers make a palm, 4 palms make a foot, 6 palms make a cubit, 4 cubits make a man, 4 cubits make a footstep, 24 palms make a man and these measures are in his buildings. If you open your legs enough that your head is lowered by 1/14 of your height and raise your arms enough that your extended fingers touch the line of the top of your head, let you know that the center of the ends of the open limbs will be the navel, and the space between the legs will be an equilateral triangle [ 13 ]
And below:
Tanto ap(r)e l'omo nele b(r)accia, qua(n)to ella sua alteza. Dal nasscimento de chapegli al fine di sotto del mento è il decimo dell'altez(z)a del(l)'uomo. Dal di socto del mento alla som(m)ità del chapo he l'octavo dell'altez(z)a dell'omo. Dal di sop(r)a del pecto alla som(m)ità del chapo fia il sexto dell'omo. Dal di sop(r)a del pecto al nasscime(n)to de chapegli fia la sectima parte di tucto l'omo. Dalle tette al di sop(r)a del chapo fia la quarta parte dell'omo. La mag(g)iore larg(h)ez(z)a delle spalli chontiene insè [la oct] la quarta parte dell'omo. Dal gomito alla punta della mano fia la quarta parte dell'omo, da esso gomito al termine della isspalla fia la octava parte d'esso omo; tucta la mano fia la decima parte dell'omo. Il menb(r)o birile nasscie nel mez(z)o dell'omo. Il piè fia la sectima parte dell'omo. Dal di socto del piè al di socto del ginochio fia la quarta parte dell'omo. Dal di socto del ginochio al nasscime(n)to del memb(r)o fia la quarta parte dell'omo. Le parti chessi truovano infra il me(n)to e 'l naso e 'l nasscime(n)to de chapegli e quel de cigli ciasscuno spatio p(er)se essimile alloreche è 'l terzo del volto [ 14 ]
The length of the outspread arms is equal to the height of the man. From the hairline to the bottom of the chin is one-tenth of the height of the man. From below the chin to the top of the head is one-eighth of the height of the man. From above the chest to the top of the head is one-sixth of the height of the man. From above the chest to the hairline is one-seventh of the height of a man. From the chest to the head is a quarter of the height of the man. The maximum width of the shoulders contains a quarter of the man. From the elbow to the tip of the hand is a quarter of the height of a man; the distance from the elbow to the armpit is one-eighth of the height of the man; the length of the hand is one-tenth of the man. The virile member is at the half height of the man. The foot is one-seventh of the man. From below the foot to below the knee is a quarter of the man. From below the knee to the root of the member is a quarter of the man. The distances from the chin to the nose and the hairline and the eyebrows are equal to the ears and one-third of the face [ 14 ]
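As a worked illustration of this proportional system, the short sketch below turns the fractions listed above into absolute measurements for a given height. The function name, the selection of ratios, and the 180 cm example height are assumptions made purely for illustration:

```python
# A minimal sketch computing a selection of the proportions Leonardo
# lists above, for a chosen total height. Illustrative only.

def vitruvian_proportions(height_cm: float) -> dict:
    """Return selected body measurements as fractions of total height."""
    fractions = {
        "hairline to chin": 1 / 10,
        "below chin to top of head": 1 / 8,
        "above chest to top of head": 1 / 6,
        "above chest to hairline": 1 / 7,
        "elbow to tip of hand": 1 / 4,
        "foot length": 1 / 7,
        "hand length": 1 / 10,
        "arm span": 1,  # the outspread arms equal the height
    }
    return {part: round(frac * height_cm, 1) for part, frac in fractions.items()}

for part, length in vitruvian_proportions(180.0).items():
    print(f"{part}: {length} cm")
```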
The moderately successful architect and engineer Vitruvius lived from c. 80 – c. 20 BCE, primarily in the Roman Republic . [ 15 ] He is best known for authoring De architectura ( On Architecture ), later called the Ten Books on Architecture , which is the only substantial architecture treatise that survives from antiquity. [ 16 ] The work's third volume includes a discussion concerning body proportions , [ 1 ] where the figures of a man in a circle and a square are respectively referred to as homo ad circulum , homo ad quadratum . [ 15 ] Vitruvius explained that:
In a temple there ought to be harmony in the symmetrical relations of the different parts to the whole. In the human body, the central point is the navel. If a man is placed flat on his back, with his hands and feet extended, and a compass centered at his navel, his fingers and toes will touch the circumference of a circle thereby described. And just as the human body yields a circular outline, so too a square may be found from it. For if we measure the distance from the soles of the feet to the top of the head, and then apply that measure to the outstretched arms, the breadth will be found to be the same as the height, as in the case of a perfect square.
19th-century historians often postulated that Leonardo had no substantial inspiration from the ancient world, propagating his stance as a "modern genius" who rejected all of classicism. [ 18 ] This has been heavily disproven by many documented accounts from Leonardo's colleagues and records of him owning, reading, and being influenced by writings from antiquity. [ 18 ] The treatise of Vitruvius was long kept obscurely in monks' manuscript copies, but "rediscovered" in the 15th century by Poggio Bracciolini among works such as De Rerum natura . [ 16 ] Many artists then attempted to design figures which would satisfy Vitruvius' description, the earliest being three such images by Francesco di Giorgio Martini from around the 1470s. [ 19 ] [ 2 ] Leonardo may have been influenced by the architect Giacomo Andrea , with whom he records having dined in 1490. [ 20 ] Andrea created his own Vitruvian Man drawing that year, which was unknown to scholars until the 1980s. [ 20 ]
Leonardo's version of the Vitruvian Man corrected inaccuracies in Vitruvius's account, particularly related to the head, due to use of book two of the De pictura by Leon Battista Alberti . [ 1 ] Earlier drawings of the same subject "assumed that the circle and square should be centered around the navel", akin to Vitruvius's account, while Leonardo made the scheme work by using the man's genitals as the center of the square, and the navel as the center of the circle. [ 9 ] It is likely that Leonardo's drawings dated to 1487–1490, and entitled The proportions of the arm , were related to the Vitruvian Man , possibly serving as preparatory sketches. [ 21 ]
Some commentators have speculated that Leonardo incorporated the golden ratio in the drawing, possibly due to his illustrations of Luca Pacioli 's Divina proportione , partially plagiarized from Piero della Francesca , [ 22 ] [ n 3 ] concerning the ratio. [ 23 ] [ 24 ] However, the Vitruvian Man is likely to have been drawn before Leonardo met Pacioli, and there has been doubt over the accuracy of such an observation. [ 25 ] As architectural scholar Vitor Murtinho explains, a circle tangent to the base of a square, with the radius and square sides related by the golden ratio, would pass exactly through the top two corners of the square, unlike Leonardo's drawing. He suggests instead constructions based on a regular octagon or on the vesica piscis . [ 25 ]
Leonardo's drawing is almost always dated to around 1490 during his first Milanese period . [ 15 ] [ 26 ] The exact dating is not completely agreed upon and earlier generations of art historians, including Arthur E. Popham , frequently dated the work anywhere from 1485 to 1490. [ 1 ] Two leading art historians differ in this respect; Martin Kemp gives c. 1487 , [ 4 ] [ n 4 ] while Carmen C. Bambach contends that the earliest possible date—which "one may not entirely discount"—is 1488. [ 1 ] Bambach, in addition to Pedretti, Giovanna Nepi Scirè and Annalisa Perissa Torrini, gives a slightly broader range of c. 1490–1491 . [ 7 ] Bambach explains that this range fits "best with the manner of exact, engraving-like parallel hatching contained within robust pen-and-ink outlines, over traces of leadpoint, stylus-ruling, and compass composition". [ 1 ]
After Leonardo's death, the drawing most likely passed to his student Francesco Melzi (1491–1570), [ 1 ] who was bequeathed most of Leonardo's possessions. [ 27 ] From then on, the drawing's provenance history is almost certain: it found its way to Cesare Monti (1594–1650), was passed to his heir Anna Luisa Monti, then to the de Pagave family, first Venanzio de Pagave (in 1777) and then his son Gaudenzio de Pagave. [ 1 ] [ 28 ] The elder de Pagave convinced the engraver Carlo Giuseppe Gerli to publish a book of Leonardo's drawings, which was the first widespread dissemination of the Vitruvian Man and many other Leonardo drawings. [ 29 ] The younger de Pagave sold the drawing to Giuseppe Bossi , who described, discussed, and illustrated it in the fourth chapter of his 1810 monograph on Leonardo's The Last Supper , Del Cenacolo di Leonardo da Vinci ( On The Last Supper of Leonardo da Vinci ). [ 30 ] This chapter was published as a stand-alone study the next year as Delle opinioni di Leonardo da Vinci intorno alla simmetria de' corpi umani ( On the opinions of Leonardo da Vinci regarding the symmetry of human bodies ). [ 30 ] After Bossi's death in 1815, the drawing was sold to the abbot Luigi Celotti in 1818, and entered the Venetian Gallerie dell'Accademia 's collection in 1822, where it has since remained. [ 1 ] Because of its high artistic quality and its well-recorded history of provenance, Leonardo's authorship of the Vitruvian Man has never been doubted. [ 1 ]
The Vitruvian Man is rarely displayed as extended exposure to light would cause fading; it is kept on the fourth floor of the Gallerie dell'Accademia, in a locked room. [ 31 ] In 2019, the Louvre requested to borrow the drawing for their monumental Léonard de Vinci exhibition, which celebrated the 500th anniversary of the artist's death. [ 32 ] They faced substantial resistance from the heritage group Italia Nostra , who contended that the drawing was too fragile to be transported, and filed a lawsuit. [ 33 ] At a hearing on 16 October 2019, a judge ruled that the group had not proven their claim, but set a maximum amount of light for the drawing to be exposed to as well as a subsequent rest period to offset its overall exposure to light. [ 34 ] The Louvre promised to lend paintings by Raphael to Italy for his own 500th death anniversary; Italy's Minister for Cultural Affairs Dario Franceschini stated that "Now a great cultural operation can start between Italy and France on the two exhibitions about Leonardo in France and Raphael in Rome." [ 35 ]
In 2022, the Gallerie dell'Accademia, which owns the drawing, sued German jigsaw puzzle manufacturer Ravensburger for reproducing the artwork in one of the company's jigsaw puzzles. Ravensburger started selling the 1,000-piece jigsaw puzzle in Italy in 2009; in 2019 the museum sent the company a cease-and-desist letter and demanded 10% of the revenue. Ravensburger refused to comply and was subsequently sued by the museum under Italy's 2004 Cultural Heritage and Landscape Code, which governs reproductions of works deemed to be part of Italy's cultural heritage. In its objections, the German company claimed that it had the right to reproduce the artwork because it had been in the public domain for centuries, and that the reproduction occurred outside Italy and was thus not subject to Italy's Cultural Heritage Code. An Italian court rejected Ravensburger's arguments and decided in favor of the museum. [ 36 ] In a ruling dated 17 November 2022, the court ordered the puzzle company to cease producing the product for commercial purposes and levied a fine of 1,500 euros for every day that the company failed to comply. [ 37 ] [ 38 ] [ 39 ] In March 2024, a German court ruled in favor of the company, stating that the Cultural Heritage Code is not applicable outside Italy, and that applying it there would violate the sovereignty of individual states. In response, an Italian government official said Italy would challenge this "abnormal" German ruling, even before European and international courts. [ 40 ]
Licensing fees for famous artworks are an important source of income for Italian museums, and Italian law says that museums owning famous public domain works hold the copyright on those works forever and can control who is allowed to make copies and derivative works of them. [ 36 ]
The Vitruvian Man is often considered an archetypal representative of the High Renaissance , just as Leonardo himself came to represent the archetypal Renaissance man . [ 41 ] It holds a unique distinction in aligning art, mathematics, science, classicism, and naturalism. [ 42 ] The art historian Ludwig Heinrich Heydenreich , writing for Encyclopædia Britannica , states, "Leonardo envisaged the great picture chart of the human body he had produced through his anatomical drawings and Vitruvian Man as a cosmografia del minor mondo (' cosmography of the microcosm '). He believed the workings of the human body to be an analogy, in microcosm, for the workings of the universe." [ 43 ]
Kemp calls the drawing "the world's most famous drawing", [ 9 ] while Bambach describes it as "justly ranked among the all-time iconic images of Western civilization ". [ 1 ] Reflecting on its fame, Bambach further stated in 2019 that "the endless recent fetishizing of the image by modern commerce through ubiquitous reproductions (in popular books, advertising, and the Euro coin ) has kidnapped it from the realm of Renaissance drawing, making it difficult for the viewer to appreciate it as a work of nuanced, creative expression." [ 1 ]
|
https://en.wikipedia.org/wiki/Vitruvian_Man
|
Vittel is a French brand of bottled water sold in many countries. [ 1 ] [ 2 ] Since 1992 it has been owned by the Swiss company Nestlé . [ 3 ] [ 4 ] It is among the leading French mineral water companies, along with Perrier and Evian . [ 5 ]
Vittel is produced using mineral water that is sourced from the "Great Spring" in Vittel , France, and has been bottled and made available for curative and, increasingly, for commercial purposes since 1854. [ 1 ] [ 6 ]
|
https://en.wikipedia.org/wiki/Vittel_(water)
|
Vivergo Fuels is a bio-ethanol producer, headquartered in Hessle , East Riding of Yorkshire , whose plant is based at Salt End , Kingston upon Hull , England. The company produces bio-fuels from locally sourced wheat; besides bio-ethanol, the process also yields animal feed as a by-product. The company's plant was shut down between November 2017 and April 2018 whilst demand for its product was low. The company blamed the United Kingdom government for not ruling that bio-fuel additives to petrol should be greater than 4.75%.
It is the largest manufacturer of bio-ethanol in the United Kingdom and the second largest producer in Europe. [ 3 ]
Vivergo was first proposed in 2007 as a joint venture between AB Sugar , BP and DuPont . The company had £350 million ($400 million) invested into it and opened for business in July 2013, [ 4 ] [ 5 ] with both AB Sugar and BP taking a 47% share and DuPont the remaining 6%. [ 6 ] In May 2015, BP pulled out of the venture and sold its stake to AB Sugar, giving them 94% of the company. [ 7 ] [ 8 ]
The construction phase was beset by industrial action in March 2011; Vivergo had employed a company to build the plant, but it was behind schedule and so fired the company and sought another contractor to complete the task. This left 400 workers unemployed and the GMB union believed that Vivergo should continue to employ the workers whilst the search for a new contractor was completed. [ 9 ] Redhall, a Wakefield-based company, was awarded the £18 million contract to design and build the plant in February 2010. The project was to have been completed by the end of 2010, but by the time of the industrial action, it was four months behind schedule. [ 10 ] Redhall later successfully sued Vivergo for breach of contract. [ 11 ]
The company receives over 1,100,000 tonnes (1,200,000 tons) of wheat per year and from that produces 420,000,000 litres (92,000,000 imp gal; 110,000,000 US gal) of bio-ethanol with 500,000 tonnes (550,000 tons) of animal feed as by-product. The wheat is sourced from over 900 farms across Yorkshire and Lincolnshire with the bulk coming from the East Riding of Yorkshire. Wheat sourced from this region is high in starch which makes it ideal to process into bio-ethanol. [ 12 ] The animal feed is sold on to over 800 farms across the United Kingdom. When the plant was opened, Frontier Agriculture had an exclusive contract to supply the transport from farms to the Vivergo plant. [ 13 ]
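Taken together, these figures imply a rough per-tonne yield (a back-of-envelope inference from the numbers above, not a figure quoted by the company): 420,000,000 L ÷ 1,100,000 t ≈ 380 litres of bio-ethanol, and 500,000 t ÷ 1,100,000 t ≈ 0.45 tonnes of animal feed, per tonne of wheat processed.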
The plant was deliberately located on the Humber Estuary to take advantage of the east coast ports' ability to export bulk liquids by ship. Its location close to the major wheat-producing areas in eastern England made it ideal. Its nearest rival in terms of bio-fuels in the United Kingdom is the Ensus plant on Teesside , which, whilst producing less bio-ethanol and animal feed, also produces over 300,000 tonnes (330,000 tons) of carbon dioxide gas for the drinks industry, something that Vivergo does not. [ 14 ] This makes Vivergo the largest producer of bio-ethanol in the United Kingdom and the second largest producer in Europe. [ 15 ]
In November 2017, the plant was subject to an enforced closure by the company. Vivergo claimed that the business was unsustainable because the government was not adhering to its own Renewable Transport Fuel Obligation (RTFO) policy. At the time, the company produced bio-ethanol for E5, a blend of up to 5% bio-ethanol with 95% petrol. [ 16 ] Vivergo wished to produce E10, which would see bio-fuel additives in petrol increase from 4.75% to 9.75% by 2020. After some government debate and agreement, the RTFO was adopted and implemented in April 2018. The plant re-opened for business in April 2018, [ 1 ] with E10 becoming law by 2020. [ 17 ]
In April 2025, Vivergo owner ABF said it was exploring options for the future of the site, including mothballing or closing the facility, blaming the move on the government “undermining” its viability. [ 18 ]
|
https://en.wikipedia.org/wiki/Vivergo_Fuels
|
Viviani's theorem , named after Vincenzo Viviani , states that the sum of the shortest distances from any interior point to the sides of an equilateral triangle equals the length of the triangle's altitude . [ 1 ] It is a theorem commonly employed in various math competitions, secondary school mathematics examinations, and has wide applicability to many problems in the real world.
This proof depends on the readily-proved proposition that the area of a triangle is half its base times its height—that is, half the product of one side with the altitude from that side. [ 2 ]
Let ABC be an equilateral triangle whose height is h and whose side is a .
Let P be any point inside the triangle, and s, t, u the perpendicular distances of P from the sides. Draw a line from P to each of A, B, and C, forming three triangles PAB, PBC, and PCA.
Now, the areas of these triangles are (u·a)/2, (s·a)/2, and (t·a)/2. They exactly fill the enclosing triangle, so the sum of these areas is equal to the area of the enclosing triangle.
So we can write:

(u·a)/2 + (s·a)/2 + (t·a)/2 = (h·a)/2

and thus, dividing through by a/2,

u + s + t = h

Q.E.D.
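The identity can also be checked numerically. The following sketch is an illustrative verification, not part of the source proof; the side length, point names, and sample count are arbitrary choices:

```python
# Numeric check of Viviani's theorem: for random interior points of an
# equilateral triangle, the distances to the three sides sum to the altitude.
import random

a = 2.0                   # side length
h = a * 3 ** 0.5 / 2      # altitude
A, B, C = (0.0, 0.0), (a, 0.0), (a / 2, h)

def dist_to_line(p, q, r):
    """Perpendicular distance from point p to the line through q and r."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    num = abs((ry - qy) * px - (rx - qx) * py + rx * qy - ry * qx)
    return num / ((ry - qy) ** 2 + (rx - qx) ** 2) ** 0.5

for _ in range(5):
    # Random barycentric coordinates give a point inside the triangle.
    w = sorted(random.random() for _ in range(2))
    l1, l2, l3 = w[0], w[1] - w[0], 1 - w[1]
    P = (l1 * A[0] + l2 * B[0] + l3 * C[0],
         l1 * A[1] + l2 * B[1] + l3 * C[1])
    total = dist_to_line(P, A, B) + dist_to_line(P, B, C) + dist_to_line(P, C, A)
    print(f"sum of distances = {total:.12f}, altitude = {h:.12f}")
```

Every printed sum matches the altitude to within floating-point precision, regardless of where the interior point falls.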
The converse also holds: If the sum of the distances from an interior point of a triangle to the sides is independent of the location of the point, the triangle is equilateral. [ 3 ]
Viviani's theorem means that lines parallel to the sides of an equilateral triangle give coordinates for making ternary plots , such as flammability diagrams .
More generally, they allow one to give coordinates on a regular simplex in the same way.
The sum of the distances from any interior point of a parallelogram to the sides is independent of the location of the point. The converse also holds: If the sum of the distances from a point in the interior of a quadrilateral to the sides is independent of the location of the point, then the quadrilateral is a parallelogram. [ 3 ]
The result generalizes to any 2 n -gon with opposite sides parallel. Since the sum of distances between any pair of opposite parallel sides is constant, it follows that the sum of all pairwise sums between the pairs of parallel sides is also constant. The converse in general is not true, as the result holds for an equilateral hexagon, which does not necessarily have opposite sides parallel.
If a polygon is regular (both equiangular and equilateral ), the sum of the distances to the sides from an interior point is independent of the location of the point. Specifically, it equals n times the apothem , where n is the number of sides and the apothem is the distance from the center to a side. [ 3 ] [ 4 ] However, the converse does not hold; the non-square parallelogram is a counterexample . [ 3 ]
The sum of the distances from an interior point to the sides of an equiangular polygon does not depend on the location of the point. [ 1 ]
A necessary and sufficient condition for a convex polygon to have a constant sum of distances from any interior point to the sides is that there exist three non-collinear interior points with equal sums of distances. [ 1 ]
The sum of the distances from any point in the interior of a regular polyhedron to the sides is independent of the location of the point. However, the converse does not hold, not even for tetrahedra . [ 3 ]
|
https://en.wikipedia.org/wiki/Viviani's_theorem
|
In plants, vivipary occurs when seeds or embryos begin to develop before they detach from the parent. Plants such as some Iridaceae and Agavoideae grow cormlets in the axils of their inflorescences. These fall and in favourable circumstances they have effectively a whole season's start over fallen seeds. Similarly, some Crassulaceae , such as Bryophyllum , develop and drop plantlets from notches in their leaves, ready to grow. Such production of embryos from somatic tissues is asexual vegetative reproduction that amounts to cloning .
Most seed-bearing fruits produce a hormone that suppresses germination until after the fruit or parent plant dies, or the seeds pass through an animal's digestive tract. At this stage, the hormone's effect will dissipate and germination will occur once conditions are suitable. Some species lack this suppressant hormone as a central part of their reproductive strategy, for example fruits that develop in climates without large seasonal variations. [ 1 ] This phenomenon occurs most frequently on ears of corn, tomatoes, strawberries, peppers, pears, citrus fruits, and plants that grow in mangrove environments. [ 2 ]
In some species of mangroves , for instance, the seed germinates and grows from its own resources while still attached to its parent. Seedlings of some species are dispersed by currents if they drop into the water, but others develop a heavy, straight taproot that commonly penetrates mud when the seedling drops, thereby effectively planting the seedling. This contrasts with the examples of vegetative reproduction mentioned above, in that the mangrove plantlets are true seedlings produced by sexual reproduction . [ citation needed ]
In some trees, like jackfruit , some citrus, and avocado, the seeds can be found already germinated while the fruit goes overripe; strictly speaking this condition cannot be described as vivipary [ citation needed ] , but the moist and humid conditions provided by the fruit mimic a wet soil that encourages germination. However, the seeds can also germinate in moist soil. [ 3 ]
Vivipary includes reproduction via embryos, such as shoots or bulbils , as opposed to germinating externally from a dropped, dormant seed , as is usual in plants. [ 4 ] [ 5 ]
A few plants are pseudoviviparous – instead of reproducing with seeds, there are monocots that can reproduce asexually by creating new plantlets in their spikelets . [ 6 ] Examples are seagrass species belonging to the genus Posidonia [ 7 ] and the alpine meadow-grass, Poa alpina . [ 8 ]
|
https://en.wikipedia.org/wiki/Vivipary
|
Vivisection (from Latin vivus ' alive ' and sectio ' cutting ' ) is surgery conducted for experimental purposes on a living organism, typically animals with a central nervous system , to view living internal structure. The word is, more broadly, used as a pejorative [ 1 ] catch-all term for experimentation on live animals [ 2 ] [ 3 ] [ 4 ] by organizations opposed to animal experimentation, [ 5 ] but the term is rarely used by practicing scientists. [ 3 ] [ 6 ] Human vivisection, such as live organ harvesting , has been perpetrated as a form of torture . [ 7 ] [ 8 ]
Research requiring vivisection techniques that cannot be met through other means is often subject to an external ethics review in conception and implementation, and in many jurisdictions use of anesthesia is legally mandated for any surgery likely to cause pain to any vertebrate . [ 9 ]
In the United States, the Animal Welfare Act explicitly requires that any procedure that may cause pain use "tranquilizers, analgesics, and anesthetics" with exceptions when "scientifically necessary". [ 10 ] The act does not define "scientific necessity" or regulate specific scientific procedures, [ 11 ] but approval or rejection of individual techniques in each federally funded lab is determined on a case-by-case basis by the Institutional Animal Care and Use Committee , which contains at least one veterinarian, one scientist, one non-scientist, and one other individual from outside the university. [ 12 ]
In the United Kingdom, any experiment involving vivisection must be licensed by the Home Secretary . The Animals (Scientific Procedures) Act 1986 "expressly directs that, in determining whether to grant a licence for an experimental project, 'the Secretary of State shall weigh the likely adverse effects on the animals concerned against the benefit likely to accrue. ' " [ 13 ]
In Australia , the Code of Practice "requires that all experiments must be approved by an Animal Experimentation Ethics Committee" that includes a "person with an interest in animal welfare who is not employed by the institution conducting the experiment, and an additional independent person not involved in animal experimentation." [ 14 ]
Anti-vivisectionists have played roles in the emergence of the animal welfare and animal rights movements, arguing that animals and humans have the same natural rights as living creatures, and that it is inherently immoral to inflict pain or injury on another living creature, regardless of the purpose or potential benefit to mankind. [ 5 ] [ 15 ]
At the turn of the 19th century, medicine was undergoing a transformation. The emergence of hospitals and the development of more advanced medical tools such as the stethoscope are but a few of the changes in the medical field. [ 16 ] There was also an increased recognition that medical practices needed to be improved, as many of the current therapeutics were based on unproven, traditional theories that may or may not have helped the patient recover. The demand for more effective treatment shifted emphasis to research with the goal of understanding disease mechanisms and anatomy. [ 16 ] This shift had a few effects, one of which was the rise in patient experimentation, leading to some moral questions about what was acceptable in clinical trials and what was not. An easy solution to the moral problem was to use animals in vivisection experiments, so as not to endanger human patients. This, however, had its own set of moral obstacles, leading to the anti-vivisection movement. [ 16 ]
One polarizing figure in the anti-vivisection movement was François Magendie . Magendie was a physiologist at the Académie Royale de Médecine in France, established in the first half of the 19th century. [ 16 ] Magendie made several groundbreaking medical discoveries, but was far more aggressive than some of his contemporaries in the use of animal experimentation. For example, the discovery of the different functionalities of dorsal and ventral spinal nerve roots was achieved by both Magendie, as well as a Scottish anatomist named Charles Bell . Bell used an unconscious rabbit because of "the protracted cruelty of the dissection", which caused him to miss that the dorsal roots were also responsible for sensory information. Magendie, on the other hand, used conscious, six-week-old puppies for his own experiments. [ 16 ] [ 17 ] While Magendie's approach would today be considered an abuse of animal rights, both Bell and Magendie used the same rationalization for vivisection: the cost of animal experimentation being worth it for the benefit of humanity. [ 17 ]
Many [ who? ] viewed Magendie's work as cruel and unnecessarily torturous. Notably, Magendie carried out many of his experiments before the advent of anesthesia, but even after ether was discovered, it was not used in any of his experiments or classes. [ 16 ] Even during the period before anesthesia, other physiologists [ who? ] expressed their disgust with how he conducted his work. One visiting American physiologist described the animals as "victims" and the apparent sadism that Magendie displayed when teaching his classes. [ verify ] Magendie's experiments were cited in the drafting of the British Cruelty to Animals Act 1876 and the Cruel Treatment of Cattle Act 1822 , otherwise known as Martin's Act. [ 16 ] The latter bill's namesake, Irish MP and well-known anti-cruelty campaigner Richard Martin , called Magendie a "disgrace to Society" and his public vivisections "anatomical theatres" following a prolonged dissection of a greyhound which attracted wide public comment. [ 18 ] Magendie faced widespread opposition in British society, among the general public but also his contemporaries, including William Sharpey , who described his experiments as not only cruel but "purposeless" and "without sufficient object", a feeling he claimed was shared among other physiologists. [ 19 ]
The Cruelty to Animals Act, 1876 in Britain determined that one could only conduct vivisection on animals with the appropriate license from the state, and that the work the physiologist was doing had to be original and absolutely necessary. [ 20 ] The stage was set for such legislation by physiologist David Ferrier . Ferrier was a pioneer in understanding the brain and used animals to show that certain locales of the brain corresponded to bodily movement elsewhere in the body in 1873. He put these animals to sleep, and caused them to move unconsciously with a probe. Ferrier was successful, but many decried his use of animals in his experiments. Some of these arguments came from a religious standpoint. Some were concerned that Ferrier's experiments would separate God from the mind of man in the name of science. [ 20 ] Some of the anti-vivisection movement in England had its roots in Evangelicalism and Quakerism. These religions already had a distrust for science, only intensified by the recent publishing of Darwin's Theory of Evolution in 1859. [ 17 ]
Neither side was pleased with how the Cruelty to Animals Act 1876 was passed. The scientific community felt as though the government was restricting their ability to compete with the quickly advancing France and Germany with new regulations. The anti-vivisection movement was also unhappy, but because they believed that it was a concession to scientists for allowing vivisection to continue at all. [ 20 ] Ferrier would continue to vex the anti-vivisection movement in Britain with his experiments when he had a debate with his German opponent, Friedrich Goltz. They would effectively enter the vivisection arena, with Ferrier presenting a monkey, and Goltz presenting a dog, both of which had already been operated on. Ferrier won the debate, but did not have a license, leading the anti-vivisection movement to sue him in 1881. Ferrier was not found guilty, as his assistant was the one operating, and his assistant did have a license. [ 20 ] Ferrier and his practices gained public support, leaving the anti-vivisection movement scrambling. They made the moral argument that given recent developments, scientists would venture into more extreme practices to operating on "the cripple, the mute, the idiot, the convict, the pauper, to enhance the 'interest' of [the physiologist's] experiments". [ 20 ]
In the early 20th-century the anti-vivisection movement attracted many female supporters associated with women's suffrage . [ 21 ] The American Anti-Vivisection Society advocated total abolition of vivisection whilst others such as the American Society for the Regulation of Vivisection wanted better regulation subjected to surveillance, not full prohibition. [ 21 ] [ 22 ] The Research Defence Society made up of an all-male group of physiologists was founded in 1908 to defend vivisection. [ 23 ] In the 1920s, anti-vivisectionists exerted significant influence over the editorial decisions of medical journals. [ 21 ]
It is possible that human vivisection was practised by some Greek anatomists in Alexandria in the 3rd century BCE. Celsus in De Medicina states that Herophilos of Alexandria vivisected some criminals sent by the king. The early Christian writer Tertullian states that Herophilos vivisected at least 600 live prisoners, although the accuracy of this claim is disputed by many historians. [ 24 ]
In the 12th century CE, Andalusian Arab Ibn Tufail elaborated on human vivisection in his treatise called Hayy ibn Yaqzan . In an extensive article on the subject, Iranian academic Nadia Maftouni believes him to be among the early supporters of autopsy and vivisection. [ 25 ]
Unit 731 , a biological and chemical warfare research and development unit of the Imperial Japanese Army , undertook lethal human experimentation during the period that comprised both the Second Sino-Japanese War and the Second World War (1937–1945). On the Philippine island of Mindanao , Moro Muslim prisoners of war were subjected to various forms of vivisection by the Japanese, in many cases without anesthesia. [ 8 ] [ 26 ]
Nazi human experimentation involved many medical experiments on live subjects, such as vivisections by Josef Mengele , [ 27 ] usually without anesthesia. [ 28 ]
|
https://en.wikipedia.org/wiki/Vivisection
|
Vixotrigine ( INN , USAN ), formerly known as raxatrigine, is an analgesic which is under development by Convergence Pharmaceuticals for the treatment of lumbosacral radiculopathy (sciatica) and trigeminal neuralgia (TGN). [ 1 ] [ 2 ] [ 3 ] Vixotrigine was originally claimed to be a selective central Na v 1.3 blocker , but was subsequently redefined as a selective peripheral Na v 1.7 blocker, and later once again as a non-selective voltage-gated sodium channel blocker. As of January 2018, it is in phase III clinical trials for trigeminal neuralgia and in phase II clinical studies for erythromelalgia and neuropathic pain . [ 4 ] It was previously under investigation for the treatment of bipolar disorder , but development for this indication was discontinued. [ 4 ]
|
https://en.wikipedia.org/wiki/Vixotrigine
|
Vizastar was the first integrated software package for the Commodore 64 home computer . [ 1 ] At the time of its introduction it was the only package for the C64 with features comparable to Lotus 1-2-3 , including spreadsheet, database and graphics components. It had the ability to split or merge files between the database and spreadsheet components and could split the screen into up to nine windows. [ 2 ]
To alleviate the C64's somewhat limited memory capacity when running complex software, Vizastar included a ROM cartridge that provided an additional 4 KB of RAM and also served as a form of copy protection . [ 1 ] This allowed the program to remain compatible with third-party floppy disk drives for the C64, such as the Indus GT and MSD Super Disk , unlike many copy-protected packages. Commodore serial printers were supported, as were RS-232 and Centronics printers with the appropriate interface.
The spreadsheet had a maximum size of 1000 rows by 64 columns, or 64,000 cells, but because only about 10 KB of memory remained available after the program was loaded, not all the cells could be used at once. The additional cells provided flexibility in the dimensions of the spreadsheet. The spreadsheet module had a macro feature that could execute a series of commands at once.
The database had a capacity of 1200 fields, each of which could hold 124 characters. To keep down the program's memory requirements, it was necessary to export a database to the spreadsheet module in order to sort it.
It was later ported to the Commodore 128 . [ 3 ]
Commodore Microcomputers stated that " Vizastar packs so much muscle into the 64 that it is hard to believe you're running it on a 64K machine". The reviewer approved of the software's speed, "excellent documentation", and use of windows to show multiple portions of a document. He concluded, "I ... could find no major weaknesses. It seems to be a 64 application program beyond reproach ... Vizastar is an all-star!" [ 4 ] Compute!'s Gazette said "There's nothing quite like Vizastar 128 for the Commodore 128 ... Each application, if available separately, would be a good solid program". The magazine concluded that it was "a gem of a program", and "as close as it comes" to Lotus 1-2-3 for the 128. [ 5 ] Info said of Vizastar 128, "Despite minor quirks this is a powerhouse program with few of the compromises normally associated with integrated programs". The magazine added that if the company could combine it with Vizawrite Classic, "it would become the Commodore 128 Appleworks ". [ 6 ]
|
https://en.wikipedia.org/wiki/Vizastar
|
A virtual kernel architecture ( vkernel ) is an operating system virtualisation paradigm where kernel code can be compiled to run in the user space , for example, to ease debugging of various kernel-level components, [ 3 ] [ 4 ] [ 5 ] in addition to general-purpose virtualisation and compartmentalisation of system resources . It is used by DragonFly BSD in its vkernel implementation since DragonFly 1.7, [ 2 ] having been first revealed in September 2006, [ 3 ] [ 6 ] and first released in the stable branch with DragonFly 1.8 in January 2007. [ 1 ] [ 7 ] [ 8 ] [ 9 ]
The long-term goal, in addition to easing kernel development, is to make it easier to support internet-connected computer clusters without compromising local security . [ 3 ] [ 4 ]
Similar concepts exist in other operating systems as well; in Linux, a similar virtualisation concept is known as user-mode Linux ; [ 10 ] [ 7 ] whereas in NetBSD since the summer of 2007, it has been the initial focus of the rump kernel infrastructure. [ 11 ]
The virtual kernel concept is nearly the exact opposite of the unikernel concept — with vkernel , kernel components run in userspace to ease kernel development and debugging, supported by a regular operating system kernel; whereas with a unikernel , userspace-level components run directly in kernel space for extra performance, supported by baremetal hardware or a hardware virtualisation stack. However, both vkernels and unikernels can be used for similar tasks as well, for example, to self-contain software in a virtualised environment with low overhead. In fact, NetBSD's rump kernel , originally focused on running kernel components in userspace, has since shifted into the unikernel space as well, adopting the anykernel moniker to denote support for both paradigms.
The vkernel concept is different from a FreeBSD jail in that a jail is only meant for resource isolation, and cannot be used to develop and test new kernel functionality in the userland, because each jail is sharing the same kernel. [ 7 ] (DragonFly, however, still has FreeBSD jail support as well. [ 7 ] )
In DragonFly, the vkernel can be thought of as a first-class computer architecture , comparable to i386 or amd64, and, according to Matthew Dillon circa 2007, can be used as a starting point for porting DragonFly BSD to new architectures. [ 12 ]
DragonFly's vkernel is supported by the host kernel through new system calls that help manage virtual memory address space ( vmspace ) — vmspace_create() et al., [ 3 ] [ 9 ] [ 13 ] as well as extensions to several existing system calls like mmap 's madvise — mcontrol . [ 9 ] [ 14 ] [ 15 ]
|
https://en.wikipedia.org/wiki/Vkernel
|
Vladimir Aleksandrovich Andrunakievich ( Russian : Владимир Александрович Андрунакиевич; 3 April 1917 – 22 July 1997) was a Soviet and Moldovan mathematician , known for his work in abstract algebra . He was a doctor of physical and mathematical sciences (1958), academician (1961) and vice-president (1964—1969, 1979—1990) of the Moldavian Soviet Academy of Sciences. Laureate of the State Prize of the Moldavian SSR (1972). [ 1 ]
Andrunakievich was born in Petrograd . He received his Ph.D. from the Moscow State University in 1947 under the supervision of Aleksandr Gennadievich Kurosh and Otto Schmidt . [ 2 ]
|
https://en.wikipedia.org/wiki/Vladimir_Andrunakievich
|
Vladimir Igorevich Arnold (or Arnol'd ; Russian: Влади́мир И́горевич Арно́льд , IPA: [vlɐˈdʲimʲɪr ˈiɡərʲɪvʲɪtɕ ɐrˈnolʲt] ; 12 June 1937 – 3 June 2010) [ 1 ] [ 3 ] [ 4 ] was a Soviet and Russian mathematician. He is best known for the Kolmogorov–Arnold–Moser theorem regarding the stability of integrable systems , and contributed to several areas, including geometrical theory of dynamical systems , algebra , catastrophe theory , topology , real algebraic geometry , symplectic geometry , differential equations , classical mechanics , differential-geometric approach to hydrodynamics , geometric analysis and singularity theory , including posing the ADE classification problem.
His first main result was the solution of Hilbert's thirteenth problem in 1957 at the age of 19. He co-founded three new branches of mathematics : topological Galois theory (with his student Askold Khovanskii ), symplectic topology and KAM theory .
Arnold was also known as a popularizer of mathematics. Through his lectures, seminars, and as the author of several textbooks (such as Mathematical Methods of Classical Mechanics ) and popular mathematics books, he influenced many mathematicians and physicists. [ 5 ] [ 6 ] Many of his books were translated into English. His views on education were particularly opposed to those of Bourbaki .
Vladimir Igorevich Arnold was born on 12 June 1937 in Odesa , Soviet Union (now Odesa, Ukraine ). His father was Igor Vladimirovich Arnold (1900–1948), a mathematician. His mother was Nina Alexandrovna Arnold (1909–1986, née Isakovich), a Jewish art historian. [ 4 ] While a school student, Arnold once asked his father why the product of two negative numbers is positive, and his father provided an answer involving the field properties of real numbers and the preservation of the distributive property . Arnold was deeply disappointed with this answer and developed an aversion to the axiomatic method that lasted through his life. [ 7 ] When Arnold was thirteen, his uncle Nikolai B. Zhitkov, [ 8 ] an engineer, told him about calculus and how it could be used to understand some physical phenomena. This contributed to sparking his interest in mathematics, and he started to study by himself the mathematical books his father had left to him, which included some works of Leonhard Euler and Charles Hermite . [ 9 ]
Arnold entered Moscow State University in 1954. [ 10 ] Among his teachers there were A. N. Kolmogorov , I. M. Gelfand , L. S. Pontriagin and Pavel Alexandrov . [ 11 ] While a student of Andrey Kolmogorov at Moscow State University and still a teenager, Arnold showed in 1957 that any continuous function of several variables can be constructed with a finite number of two-variable functions, thereby solving Hilbert's thirteenth problem . [ 12 ] This is the Kolmogorov–Arnold representation theorem .
Arnold obtained his PhD in 1961, with Kolmogorov as advisor. [ 13 ]
After graduating from Moscow State University in 1959, he worked there until 1986 (a professor since 1965), and then at Steklov Mathematical Institute .
He became an academician of the Academy of Sciences of the Soviet Union ( Russian Academy of Science since 1991) in 1990. [ 14 ] Arnold can be said to have initiated the theory of symplectic topology as a distinct discipline. The Arnold conjecture on the number of fixed points of Hamiltonian symplectomorphisms and Lagrangian intersections was also a motivation in the development of Floer homology .
In 1999 he suffered a serious bicycle accident in Paris, resulting in traumatic brain injury . He regained consciousness after a few weeks but had amnesia and for some time could not even recognize his own wife at the hospital. [ 15 ] He went on to make a good recovery. [ 16 ]
Arnold worked at the Steklov Mathematical Institute in Moscow and at Paris Dauphine University up until his death. His PhD students include Rifkat Bogdanov , Alexander Givental , Victor Goryunov , Sabir Gusein-Zade , Emil Horozov , Yulij Ilyashenko , Boris Khesin , Askold Khovanskii , Nikolay Nekhoroshev , Boris Shapiro , Alexander Varchenko , Victor Vassiliev and Vladimir Zakalyukin . [ 2 ]
To his students and colleagues Arnold was also known for his sense of humour. For example, once at his seminar in Moscow, at the beginning of the school year, when he usually formulated new problems, he said:
There is a general principle that a stupid man can ask such questions to which one hundred wise men would not be able to answer. In accordance with this principle I shall formulate some problems. [ 17 ]
Arnold died of acute pancreatitis [ 18 ] on 3 June 2010 in Paris, nine days before his 73rd birthday. [ 19 ] He was buried on 15 June in Moscow, at the Novodevichy Monastery . [ 20 ]
In a telegram to Arnold's family, Russian President Dmitry Medvedev stated:
The death of Vladimir Arnold, one of the greatest mathematicians of our time, is an irretrievable loss for world science. It is difficult to overestimate the contribution made by academician Arnold to modern mathematics and the prestige of Russian science.
Teaching had a special place in Vladimir Arnold's life and he had great influence as an enlightened mentor who taught several generations of talented scientists.
The memory of Vladimir Arnold will forever remain in the hearts of his colleagues, friends and students, as well as everyone who knew and admired this brilliant man. [ 21 ]
Arnold is well known for his lucid writing style, combining mathematical rigour with physical intuition, and an easy conversational style of teaching and education. His writings present a fresh, often geometric approach to traditional mathematical topics like ordinary differential equations , and his many textbooks have proved influential in the development of new areas of mathematics. The standard criticism of Arnold's pedagogy is that his books "are beautiful treatments of their subjects that are appreciated by experts, but too many details are omitted for students to learn the mathematics required to prove the statements that he so effortlessly justifies." His defense was that his books are meant to teach the subject to "those who truly wish to understand it" (Chicone, 2007). [ 22 ]
Arnold was an outspoken critic of the trend towards high levels of abstraction in mathematics during the middle of the last century. He had very strong opinions on how this approach—which was most popularly implemented by the Bourbaki school in France—initially had a negative impact on French mathematical education , and then later on that of other countries as well. [ 23 ] [ 24 ] He was very concerned about what he saw as the divorce of mathematics from the natural sciences in the 20th century. [ 25 ] Arnold was very interested in the history of mathematics . [ 26 ] In an interview, [ 24 ] he said he had learned much of what he knew about mathematics through the study of Felix Klein 's book Development of Mathematics in the 19th Century —a book he often recommended to his students. [ 27 ] He studied the classics, most notably the works of Huygens , Newton and Poincaré , [ 28 ] and many times he reported to have found in their works ideas that had not been explored yet. [ 29 ]
Arnold worked on dynamical systems theory , catastrophe theory , topology , algebraic geometry , symplectic geometry , differential equations , classical mechanics , hydrodynamics and singularity theory . [ 5 ] Michèle Audin described him as "a geometer in the widest possible sense of the word" and said that "he was very fast to make connections between different fields". [ 30 ]
Hilbert's thirteenth problem asks the following question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The affirmative answer to this general question was given in 1957 by Vladimir Arnold, then only nineteen years old and a student of Andrey Kolmogorov . Kolmogorov had shown in the previous year that any function of several variables can be constructed with a finite number of three-variable functions. Arnold then expanded on this work to show that only two-variable functions were in fact required, thus answering Hilbert's question when posed for the class of continuous functions. [ 31 ]
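In its modern form, the resulting Kolmogorov–Arnold representation states that every continuous function of n variables can be built from addition and continuous one-variable functions Φ q and φ q,p (a standard statement of the theorem, quoted here for reference): f ( x 1 , … , x n ) = ∑ q = 0 2 n Φ q ( ∑ p = 1 n φ q , p ( x p ) ) {\displaystyle f(x_{1},\ldots ,x_{n})=\sum _{q=0}^{2n}\Phi _{q}\!\left(\sum _{p=1}^{n}\varphi _{q,p}(x_{p})\right)}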
Moser and Arnold expanded the ideas of Kolmogorov (who was inspired by questions of Poincaré ) and gave rise to what is now known as Kolmogorov–Arnold–Moser theorem (or "KAM theory"), which concerns the persistence of some quasi-periodic motions (nearly integrable Hamiltonian systems ) when they are perturbed. KAM theory shows that, despite the perturbations, such systems can be stable over an infinite period of time, and specifies what the conditions for this are. [ 32 ]
In 1964, Arnold introduced the Arnold web , the first example of a stochastic web. [ 33 ] [ 34 ]
In 1965, Arnold attended René Thom 's seminar on catastrophe theory . He later said of it: "I am deeply indebted to Thom, whose singularity seminar at the Institut des Hautes Etudes Scientifiques , which I frequented throughout the year 1965, profoundly changed my mathematical universe." [ 35 ] After this event, singularity theory became one of the major interests of Arnold and his students. [ 36 ] Among his most famous results in this area is his classification of simple singularities, contained in his paper "Normal forms of functions near degenerate critical points, the Weyl groups of A k ,D k ,E k and Lagrangian singularities". [ 37 ] [ 38 ] [ 39 ]
In 1966, Arnold published " Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l'hydrodynamique des fluides parfaits ", in which he presented a common geometric interpretation for both the Euler's equations for rotating rigid bodies and the Euler's equations of fluid dynamics; this effectively linked topics previously thought to be unrelated, and enabled mathematical solutions to many questions related to fluid flows and their turbulence. [ 40 ] [ 41 ] [ 42 ]
In the year 1971, Arnold published "On the arrangement of ovals of real plane algebraic curves, involutions of four-dimensional smooth manifolds, and the arithmetic of integral quadratic forms", [ 43 ] which gave new life to real algebraic geometry . In it, he made major advances in the direction of a solution to Gudkov's conjecture , by finding a connection between it and four-dimensional topology . [ 44 ] The conjecture was to be later fully solved by V. A. Rokhlin building on Arnold's work. [ 45 ] [ 46 ]
The Arnold conjecture , linking the number of fixed points of Hamiltonian symplectomorphisms and the topology of the subjacent manifolds , was the motivating source of many of the pioneer studies in symplectic topology. [ 47 ] [ 48 ]
According to Victor Vassiliev , Arnold "worked comparatively little on topology for topology's sake", being motivated instead by problems in other areas of mathematics where topology could be of use. His contributions include the invention of a topological form of the Abel–Ruffini theorem and the initial development of some of the consequent ideas, work which resulted in the creation of the field of topological Galois theory in the 1960s. [ 49 ] [ 50 ]
According to Marcel Berger , Arnold revolutionized plane curves theory. [ 51 ] He developed the theory of smooth closed plane curves in the 1990s. [ 52 ] Among his contributions are the introduction of the three Arnold invariants of plane curves: J + , J − and St . [ 53 ] [ 54 ]
Arnold conjectured the existence of the gömböc , a body with just one stable and one unstable point of equilibrium when resting on a flat surface. [ 55 ] [ 56 ]
Arnold generalized the results of Isaac Newton , Pierre-Simon Laplace , and James Ivory on the shell theorem , showing it to be applicable to algebraic hypersurfaces. [ 57 ]
The minor planet 10031 Vladarnolda was named after him in 1981 by Lyudmila Georgievna Karachkina . [ 71 ]
The Arnold Mathematical Journal , published for the first time in 2015, is named after him. [ 72 ]
The Arnold Fellowships, of the London Institute are named after him. [ 73 ] [ 74 ]
He was a plenary speaker at both the 1974 and 1983 International Congress of Mathematicians in Vancouver and Warsaw , respectively. [ 75 ]
Even though Arnold was nominated for the 1974 Fields Medal , one of the highest honours a mathematician could receive, interference from the Soviet government led to it being withdrawn. Arnold's public opposition to the persecution of dissidents had led him into direct conflict with influential Soviet officials, and he suffered persecution himself, including not being allowed to leave the Soviet Union during most of the 1970s and 1980s. [ 76 ] [ 77 ]
|
https://en.wikipedia.org/wiki/Vladimir_Arnold
|
Vladimir Iosifovich Levenshtein (Russian: Влади́мир Ио́сифович Левенште́йн , IPA: [vlɐˈdʲimʲɪr ɨˈosʲɪfəvʲɪtɕ lʲɪvʲɪnˈʂtʲejn] ; 20 May 1935 – 6 September 2017) was a Russian and Soviet scientist who did research in information theory , error-correcting codes , and combinatorial design . [ 1 ] Among other contributions, he is known for the Levenshtein distance and a Levenshtein algorithm, which he developed in 1965.
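As an illustration, the distance can be computed with the standard dynamic-programming recurrence; the following minimal Python sketch is a textbook formulation, not Levenshtein's original 1965 presentation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to transform string a into string b."""
    prev = list(range(len(b) + 1))            # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                            # deleting the first i characters of a
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1       # substitution is free on a match
            curr.append(min(prev[j] + 1,      # deletion from a
                            curr[j - 1] + 1,  # insertion into a
                            prev[j - 1] + cost))
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3  # the classic example: 3 edits
```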
He graduated from the Department of Mathematics and Mechanics of Moscow State University in 1958 and worked at the Keldysh Institute of Applied Mathematics in Moscow ever since. He was a fellow of the IEEE Information Theory Society.
He received the IEEE Richard W. Hamming Medal in 2006, for "contributions to the theory of error-correcting codes and information theory, including the Levenshtein distance". [ 2 ]
|
https://en.wikipedia.org/wiki/Vladimir_Levenshtein
|
In plasma physics , the Vlasov equation is a differential equation describing the time evolution of the distribution function of a collisionless plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 [ 1 ] [ 2 ] and later discussed by him in detail in a monograph. [ 3 ] The Vlasov equation, combined with the Landau kinetic equation , describes collisional plasma.
First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction . He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics:
Vlasov suggests that these difficulties originate from the long-range character of Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates : d d t f ( r , p , t ) = 0 , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f(\mathbf {r} ,\mathbf {p} ,t)=0,}
explicitly a PDE : ∂ f ∂ t + d r d t ⋅ ∂ f ∂ r + d p d t ⋅ ∂ f ∂ p = 0 , {\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {p} }}=0,} and adapted it to the case of a plasma, leading to the systems of equations shown below. [ 5 ] Here f is a general distribution function of particles with momentum p at coordinates r and given time t . Note that the term d p d t {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}} is the force F acting on the particle.
Instead of collision-based kinetic description for interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} for electrons and (positive) plasma ions . The distribution function f α ( r , p , t ) {\displaystyle f_{\alpha }(\mathbf {r} ,\mathbf {p} ,t)} for species α describes the number of particles of the species α having approximately the momentum p {\displaystyle \mathbf {p} } near the position r {\displaystyle \mathbf {r} } at time t . Instead of the Boltzmann equation, the following system of equations was proposed for description of charged components of plasma (electrons and positive ions): ∂ f e ∂ t + v e ⋅ ∇ f e − e ( E + v e c × B ) ⋅ ∂ f e ∂ p = 0 ∂ f i ∂ t + v i ⋅ ∇ f i + Z i e ( E + v i c × B ) ⋅ ∂ f i ∂ p = 0 {\displaystyle {\begin{aligned}{\frac {\partial f_{e}}{\partial t}}+\mathbf {v} _{e}\cdot \nabla f_{e}-\;\;e\left(\mathbf {E} +{\frac {\mathbf {v} _{e}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{e}}{\partial \mathbf {p} }}&=0\\{\frac {\partial f_{i}}{\partial t}}+\mathbf {v} _{i}\cdot \nabla f_{i}+Z_{i}e\left(\mathbf {E} +{\frac {\mathbf {v} _{i}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{i}}{\partial \mathbf {p} }}&=0\end{aligned}}}
∇ × B = 4 π c j + 1 c ∂ E ∂ t , ∇ ⋅ B = 0 , ∇ × E = − 1 c ∂ B ∂ t , ∇ ⋅ E = 4 π ρ , {\displaystyle {\begin{aligned}\nabla \times \mathbf {B} &={\frac {4\pi }{c}}\mathbf {j} +{\frac {1}{c}}{\frac {\partial \mathbf {E} }{\partial t}},&\nabla \cdot \mathbf {B} &=0,\\\nabla \times \mathbf {E} &=-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}},&\nabla \cdot \mathbf {E} &=4\pi \rho ,\end{aligned}}}
ρ = e ∫ ( Z i f i − f e ) d 3 p , j = e ∫ ( Z i f i v i − f e v e ) d 3 p , v α = p / m α 1 + p 2 / ( m α c ) 2 {\displaystyle {\begin{aligned}\rho &=e\int \left(Z_{i}f_{i}-f_{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {j} &=e\int \left(Z_{i}f_{i}\mathbf {v} _{i}-f_{e}\mathbf {v} _{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {v} _{\alpha }&={\frac {\mathbf {p} /m_{\alpha }}{\sqrt {1+p^{2}/\left(m_{\alpha }c\right)^{2}}}}\end{aligned}}}
Here e is the elementary charge ( e > 0 {\displaystyle e>0} ), c is the speed of light , Z i e is the charge of the ions, m i is the mass of the ion, E ( r , t ) {\displaystyle \mathbf {E} (\mathbf {r} ,t)} and B ( r , t ) {\displaystyle \mathbf {B} (\mathbf {r} ,t)} represent the collective self-consistent electromagnetic field created at the point r {\displaystyle \mathbf {r} } at time t by all plasma particles. The essential difference of this system of equations from equations for particles in an external electromagnetic field is that the self-consistent electromagnetic field depends in a complex way on the distribution functions of electrons and ions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} .
The Vlasov–Poisson equations are an approximation of the Vlasov–Maxwell equations in the non-relativistic zero-magnetic field limit: ∂ f α ∂ t + v α ⋅ ∂ f α ∂ x + q α E m α ⋅ ∂ f α ∂ v = 0 , {\displaystyle {\frac {\partial f_{\alpha }}{\partial t}}+\mathbf {v} _{\alpha }\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {x} }}+{\frac {q_{\alpha }\mathbf {E} }{m_{\alpha }}}\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {v} }}=0,}
and Poisson's equation for self-consistent electric field: ∇ 2 ϕ + ρ ε = 0. {\displaystyle \nabla ^{2}\phi +{\frac {\rho }{\varepsilon }}=0.}
Here q α is the particle's electric charge, m α is the particle's mass, E ( x , t ) {\displaystyle \mathbf {E} (\mathbf {x} ,t)} is the self-consistent electric field , ϕ ( x , t ) {\displaystyle \phi (\mathbf {x} ,t)} the self-consistent electric potential , ρ is the electric charge density, and ε {\displaystyle \varepsilon } is the electric permittivity .
Vlasov–Poisson equations are used to describe various phenomena in plasma, in particular Landau damping and the distributions in a double layer plasma, where they are necessarily strongly non- Maxwellian , and therefore inaccessible to fluid models.
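To make the structure of these equations concrete, below is a minimal numerical sketch of the 1D-1V electrostatic Vlasov–Poisson system for electrons on a fixed neutralizing ion background, using operator splitting (advect in x, solve Poisson spectrally, accelerate in v) in normalized units ( q = m = ε = 1 ). The grid sizes, time step, and weak-Landau-damping initial condition are illustrative assumptions, not a reference implementation:

```python
import numpy as np

# 1D-1V electron Vlasov-Poisson sketch, normalized units (q = m = eps = 1).
nx, nv, L, vmax, dt = 64, 128, 4 * np.pi, 6.0, 0.1
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dx, dv = L / nx, v[1] - v[0]
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)          # angular wavenumbers

# Perturbed Maxwellian (weak Landau damping test: amplitude 0.01, mode k = 0.5).
f = (1 + 0.01 * np.cos(0.5 * x))[:, None] \
    * np.exp(-v**2 / 2)[None, :] / np.sqrt(2 * np.pi)

def efield(f):
    """Solve Gauss's law ik*E_hat = rho_hat on the periodic domain."""
    rho = 1.0 - f.sum(axis=1) * dv                # fixed ions minus electron density
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * k[1:])        # the k = 0 mode carries no field
    return np.fft.ifft(E_hat).real

def advect_x(f, tau):
    """Shift f(x, v) -> f(x - v*tau, v) exactly in Fourier space."""
    phase = np.exp(-1j * k[:, None] * v[None, :] * tau)
    return np.fft.ifft(np.fft.fft(f, axis=0) * phase, axis=0).real

for _ in range(100):                              # Strang splitting, step dt
    f = advect_x(f, dt / 2)
    E = efield(f)
    for i in range(nx):                           # electron acceleration a = -E
        f[i] = np.interp(v + E[i] * dt, v, f[i], left=0.0, right=0.0)
    f = advect_x(f, dt / 2)
```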
In fluid descriptions of plasmas (see plasma modeling and magnetohydrodynamics (MHD)) one does not consider the velocity distribution. This is achieved by replacing f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} with plasma moments such as number density n , flow velocity u and pressure p . [ 6 ] They are named plasma moments because the n -th moment of f {\displaystyle f} can be found by integrating v n f {\displaystyle v^{n}f} over velocity. These variables are only functions of position and time, which means that some information is lost. In multifluid theory, the different particle species are treated as different fluids with different pressures, densities and flow velocities. The equations governing the plasma moments are called the moment or fluid equations.
Below the two most used moment equations are presented (in SI units ). Deriving the moment equations from the Vlasov equation requires no assumptions about the distribution function.
The continuity equation describes how the density changes with time. It can be found by integration of the Vlasov equation over the entire velocity space. ∫ d f d t d 3 v = ∫ ( ∂ f ∂ t + ( v ⋅ ∇ r ) f + ( a ⋅ ∇ v ) f ) d 3 v = 0 {\displaystyle \int {\frac {\mathrm {d} f}{\mathrm {d} t}}\mathrm {d} ^{3}v=\int \left({\frac {\partial f}{\partial t}}+(\mathbf {v} \cdot \nabla _{r})f+(\mathbf {a} \cdot \nabla _{v})f\right)\mathrm {d} ^{3}v=0}
After some calculations, one ends up with ∂ n ∂ t + ∇ ⋅ ( n u ) = 0. {\displaystyle {\frac {\partial n}{\partial t}}+\nabla \cdot (n\mathbf {u} )=0.}
The number density n , and the momentum density n u , are zeroth and first order moments: n = ∫ f d 3 v {\displaystyle n=\int f\,\mathrm {d^{3}} v} n u = ∫ v f d 3 v {\displaystyle n\mathbf {u} =\int \mathbf {v} f\,\mathrm {d} ^{3}v}
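In a discretized setting such as the sketch above, these moments reduce to quadratures over the velocity grid; a minimal sketch, assuming f is sampled on an (nx, nv) grid with uniform velocity spacing dv:

```python
import numpy as np

def fluid_moments(f, v, dv):
    """Zeroth and first velocity moments of a gridded distribution function."""
    n = f.sum(axis=1) * dv                        # number density n(x)
    u = (f * v[None, :]).sum(axis=1) * dv / n     # mean flow velocity u(x)
    return n, u
```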
The rate of change of momentum of a particle is given by the Lorentz equation: m d v d t = q ( E + v × B ) {\displaystyle m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}=q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )}
By using this equation and the Vlasov Equation, the momentum equation for each fluid becomes m n D D t u = − ∇ ⋅ P + q n E + q n u × B , {\displaystyle mn{\frac {\mathrm {D} }{\mathrm {D} t}}\mathbf {u} =-\nabla \cdot {\mathcal {P}}+qn\mathbf {E} +qn\mathbf {u} \times \mathbf {B} ,} where P {\displaystyle {\mathcal {P}}} is the pressure tensor. The material derivative is D D t = ∂ ∂ t + u ⋅ ∇ . {\displaystyle {\frac {\mathrm {D} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla .}
The pressure tensor is defined as the particle mass times the covariance matrix of the velocity: p i j = m ∫ ( v i − u i ) ( v j − u j ) f d 3 v . {\displaystyle p_{ij}=m\int (v_{i}-u_{i})(v_{j}-u_{j})f\mathrm {d} ^{3}v.}
As for ideal MHD , the plasma can be considered as tied to the magnetic field lines when certain conditions are fulfilled. One often says that the magnetic field lines are frozen into the plasma. The frozen-in conditions can be derived from the Vlasov equation.
We introduce the scales T , L , and V for time, distance and speed respectively. They represent magnitudes of the different parameters which give large changes in f {\displaystyle f} . By large we mean that ∂ f ∂ t T ∼ f | ∂ f ∂ r | L ∼ f | ∂ f ∂ v | V ∼ f . {\displaystyle {\frac {\partial f}{\partial t}}T\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {r} }}\right|L\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {v} }}\right|V\sim f.}
We then write t ′ = t T , r ′ = r L , v ′ = v V . {\displaystyle t'={\frac {t}{T}},\quad \mathbf {r} '={\frac {\mathbf {r} }{L}},\quad \mathbf {v} '={\frac {\mathbf {v} }{V}}.}
Vlasov equation can now be written 1 T ∂ f ∂ t ′ + V L v ′ ⋅ ∂ f ∂ r ′ + q m V ( E + V v ′ × B ) ⋅ ∂ f ∂ v ′ = 0. {\displaystyle {\frac {1}{T}}{\frac {\partial f}{\partial t'}}+{\frac {V}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+{\frac {q}{mV}}(\mathbf {E} +V\mathbf {v} '\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0.}
So far no approximations have been done. To be able to proceed we set V = R ω g {\displaystyle V=R\omega _{g}} , where ω g = q B / m {\displaystyle \omega _{g}=qB/m} is the gyro frequency and R is the gyroradius . By dividing by ω g , we get 1 ω g T ∂ f ∂ t ′ + R L v ′ ⋅ ∂ f ∂ r ′ + ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ = 0 {\displaystyle {\frac {1}{\omega _{g}T}}{\frac {\partial f}{\partial t'}}+{\frac {R}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+\left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0}
If 1 / ω g ≪ T {\displaystyle 1/\omega _{g}\ll T} and R ≪ L {\displaystyle R\ll L} , the two first terms will be much less than f {\displaystyle f} since ∂ f / ∂ t ′ ∼ f , v ′ ≲ 1 {\displaystyle \partial f/\partial t'\sim f,v'\lesssim 1} and ∂ f / ∂ r ′ ∼ f {\displaystyle \partial f/\partial \mathbf {r} '\sim f} due to the definitions of T , L , and V above. Since the last term is of the order of f {\displaystyle f} , we can neglect the two first terms and write ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ ≈ 0 ⇒ ( E + v × B ) ⋅ ∂ f ∂ v ≈ 0 {\displaystyle \left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}\approx 0\Rightarrow (\mathbf {E} +\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} }}\approx 0}
This equation can be decomposed into a field aligned and a perpendicular part: E ∥ ∂ f ∂ v ∥ + ( E ⊥ + v × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\mathbf {E} _{\perp }+\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0}
The next step is to write v = v 0 + Δ v {\displaystyle \mathbf {v} =\mathbf {v} _{0}+\Delta \mathbf {v} } , where v 0 × B = − E ⊥ {\displaystyle \mathbf {v} _{0}\times \mathbf {B} =-\mathbf {E} _{\perp }}
It will soon be clear why this is done. With this substitution, we get E ∥ ∂ f ∂ v ∥ + ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0}
If the parallel electric field is small, ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle (\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0}
This equation means that the distribution is gyrotropic. [ 7 ] The mean velocity of a gyrotropic distribution is zero. Hence, v 0 {\displaystyle \mathbf {v} _{0}} is identical with the mean velocity, u , and we have E + u × B ≈ 0 {\displaystyle \mathbf {E} +\mathbf {u} \times \mathbf {B} \approx 0}
To summarize, the gyro period and the gyro radius must be much smaller than the typical times and lengths which give large changes in the distribution function. The gyro radius is often estimated by replacing V with the thermal velocity or the Alfvén velocity . In the latter case R is often called the inertial length. The frozen-in conditions must be evaluated for each particle species separately. Because electrons have much smaller gyro period and gyro radius than ions, the frozen-in conditions will more often be satisfied.
|
https://en.wikipedia.org/wiki/Vlasov_equation
|
Vlastimil Dlab (born 5 August 1932) is a Czech -born Canadian mathematician who has worked in Czechoslovakia , Sudan , Australia and especially Canada where he founded and led an influential department of modern mathematics. [ 1 ]
Dlab was born on August 5, 1932, in Bzí, Czechoslovakia , a historical village whose territory currently belongs to Železný Brod . He studied at Charles University in Prague , worked briefly at the Czechoslovak Academy of Sciences in 1956, and was gradually promoted to associate professor at Charles University. Between 1954 and 1964, however, he carried out university research in Khartoum in Sudan . He returned to Prague between 1964 and 1965, but the Institute of Advanced Studies in Canberra , Australia , attracted him from 1965 to 1968.
After the 1968 Warsaw Pact invasion of Czechoslovakia , he was not exactly welcomed back with open arms, and in 1971 he left for Ottawa , Canada , where he founded and led a department of modern mathematics at Carleton University that has significantly influenced the world of algebra , probability , and statistics .
Because his father was ill in the early 1980s, Dlab – as an alien – was allowed to visit Czechoslovakia and he restored his relationship with Charles University . In the late 1980s, he taught some courses again there, and he regained full professorship in 1992.
Dlab was a postdoctoral student of renowned Czech mathematician Eduard Čech .
While in Canada, Dlab worked as the editor-in-chief of mathematical journals and chairman of assorted organizations and institutions. In 1977, he was elected a fellow of the Royal Society of Canada .
Claus Michael Ringel was the co-author of some of the most famous academic works by Dlab, such as the 1976 book Indecomposable representations of graphs and algebras . Dlab helped to educate numerous students of mathematics who became successful by themselves. [ 2 ]
In recent years, Dlab has been very active in efforts to improve mathematics education. In the Czech Republic, he has often been quoted as an authority skeptical of modern teaching methods, such as the method of Milan Hejný. He emphasizes the key role played by the quality of teachers.
|
https://en.wikipedia.org/wiki/Vlastimil_Dlab
|
vmstat ( virtual memory statistics ) is a computer system monitoring tool that collects and displays summary information about operating system memory, processes, interrupts, paging and block I/O . Users of vmstat can specify a sampling interval which permits observing system activity in near-real time.
The vmstat tool is available on most Unix and Unix-like operating systems, such as FreeBSD , Linux or Solaris .
The syntax and output of vmstat often differs slightly between different operating systems.
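A typical invocation looks like the following (shown here in the Linux style; the column layout varies between systems and the figures below are purely illustrative):

```
$ vmstat 2 6
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 811240  52340 914136    0    0    12     7  105  211  3  1 95  1  0
 0  0      0 811100  52340 914168    0    0     0    24   98  187  2  1 97  0  0
 ...
```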
In the example above, the tool reports every two seconds for six iterations.
Customized output can be obtained by using the various options the vmstat command accepts.
|
https://en.wikipedia.org/wiki/Vmstat
|
VoIP vulnerabilities are weaknesses in the VoIP protocol or its implementations that expose users to privacy violations and other problems. VoIP is a group of technologies that enable voice calls online. VoIP contains similar vulnerabilities to those of other internet use.
Risks are not usually mentioned to potential customers. [ 1 ] VoIP provides no specific protections against fraud and illicit practices. [ 2 ]
Unencrypted connections are vulnerable to security breaches. Hackers /trackers can eavesdrop on conversations and extract valuable data through microphones. [ 3 ] [ 4 ] [ 5 ]
Attacks on the user's network or internet provider can disrupt or destroy the connection. [ 6 ] Since VoIP requires an internet connection, direct attacks on the internet connection or provider can be effective. Such attacks target office telephony. Mobile applications that do not rely on an internet connection to make calls [ 7 ] are unaffected by such attacks, since their calls never traverse the targeted connection.
VoIP phones are smart devices that need to be configured. In some cases, manufacturers ship devices with default passwords that lead to vulnerabilities. [ 8 ]
While VoIP is relatively secure, it still needs a source of internet, which is often a Wi-Fi network, making VoIP subject to the vulnerabilities of the underlying Wi-Fi network, such as eavesdropping on networks with weak or no encryption. [ 9 ]
Since VoIP runs over an internet connection, via wired, Wi-Fi or 4G , it is susceptible to packet loss which affects the ability to make and receive calls or makes the calls hard to hear. The susceptibility is due to the real time nature of the communication. Packet loss is the biggest reason for VoIP support calls. [ 10 ]
When VoIP was first set up, a setting called SIP ALG was added to routers to prevent VoIP packets from being modified. However, on more modern VoIP systems, the SIP ALG router setting causes routing issues with VoIP packets, causing calls to drop. Routers are usually shipped with SIP ALG turned on. [ 11 ]
VoIP is vulnerable to spam, known as SPIT ( Spam over Internet Telephony ) because it relies on the open internet, which is less regulated. Using the extensions provided by VoIP PBX capabilities, the spammer can harass their target from different numbers. [ 12 ] The process can be automated and can fill the target's voice mail with notifications. [ 12 ] The spammer can make calls often enough to block the target from getting important calls. [ 13 ]
VoIP users can change their Caller ID if they have admin rights on the VoIP server. Anyone who resells VoIP or manages their own VoIP server can allocate any phone number as an outgoing number. This is commonly done for genuine reasons when a customer is porting a number, so they can use their number on a new platform while the port takes place. But it can also be used maliciously to mask any number (a.k.a. Caller ID spoofing ), allowing a caller to pose as a relative or colleague in order to extract information, money or benefits from the target. [ 14 ]
|
https://en.wikipedia.org/wiki/VoIP_vulnerabilities
|
Vocalink is a payment systems company headquartered in the United Kingdom , created in 2007 from the merger between Voca and LINK . [ 1 ] It designs, builds and operates the UK payments infrastructure, which underpins the provision of the Bacs payment system and the UK ATM LINK switching platform covering 65,000 ATMs and the UK Faster Payments systems. [ 2 ]
Vocalink processes over 90% of UK salaries, more than 70% of household bills and 98% of state benefits. [ 3 ] In 2013 the company processed over 10.5 billion UK payments with a value of over £5 trillion. [ 4 ] In July 2016 MasterCard purchased a 92% stake in the company, with the remainder to be held by UK banks for a period of three years. [ 5 ]
In 1968, the Joint Stock Banks Clearing Committee, chaired by Dennis Gladwell, set up the Inter-Bank Computer Bureau to modernise the existing paper-based standing order system. Secure electronic funds transfer between banks was introduced later that year, [ 6 ] significantly reducing both the processing time and human error associated with paper-based transactions, particularly bulk payments. [ 7 ] In 1971 the company adopted the name "Bankers Automated Clearing Services Limited", soon shortened to Bacs , which was formally adopted as the company's name in 1985. [ 8 ]
In November 1998 HM Treasury commissioned a review of competition within the UK banking sector, to be chaired by Sir Don Cruickshank . Reporting in March 2000, the Cruickshank Report on competition within the UK banking sector made a number of recommendations. [ 9 ]
In response, on 1 December 2003, Bacs Payment Schemes Ltd (BPSL) was split from Bacs Limited. BPSL was established as a not for profit company with members from the banking industry, the purpose of which is to promote the use of automated payment schemes and govern the rules of the Bacs scheme. Bacs Limited owns the infrastructure to run the Bacs scheme. Bacs Limited was permitted to continue to use the Bacs name for one year, becoming Voca Limited on 12 October 2004. [ 11 ]
LINK Interchange Network Ltd was formed in 1985 to create interoperability between ATMs across the United Kingdom. It became an international network in the 1990s through connection with the MasterCard and Visa networks. In October 2002 LINK launched the first service that used ATMs as a retail channel, enabling the facility to top-up a mobile phone at an ATM, bringing banking and mobile phones together for the consumer. [ 12 ]
In 2005 a joint proposal from Voca and LINK was selected to deliver the payment-processing infrastructure for the Faster Payments Service , a near real-time interbank transfer for internet and telephone banking. [ 13 ]
After Voca's bulk processing was brought together with LINK's real-time payment switching, the companies agreed to merge on 2 July 2007 to form VocaLink . [ 14 ] Since its launch in 2008, over 3 billion real-time faster payment transactions have been processed by VocaLink. [ 15 ]
Bacs provides two payment products to consumers: Direct Debit and Bacs Direct Credit .
VocaLink provides the underlying BacsTEL-IP infrastructure for Bacs, [ 17 ] which processes the clearing and settlement of over 5.6 billion automated payments a year, [ 18 ] with a value of £4.2 trillion.
VocaLink provides the switching infrastructure behind LINK, the busiest ATM switching system in the world which switches over 3.5 billion card-initiated transactions from 130 million enabled cards annually. [ 19 ]
VocaLink designed, built, and operates the infrastructure for the Faster Payments Service on behalf of the Faster Payments Scheme. Running in parallel with Bacs and CHAPS and launched in 2008, it enables interbank transfers in real time. Since 2008, over three billion Faster Payments transactions have been securely processed by VocaLink. [ 20 ]
VocaLink developed Zapp, a function that resides within mobile banking apps to allow users to make real-time payments to retailers when shopping online or in-store. The service was planned to be open to all financial institutions, merchants, acquirers, and consumers at its launch in 2015. [ 21 ]
A Zapp payment works through secure digital 'tokens', which means that customers don't reveal any of their financial details to retailers. This also means that merchants do not need to store card details. [ 22 ] [ 23 ]
Zapp was developed by VocaLink, which operates the UK payments infrastructure.
Zapp is backed by four UK high street banks ( HSBC , Nationwide , Metro Bank and Santander ), meaning 18 million consumer accounts are potentially Zapp-enabled.
On 12 July 2010 through EAPS , LINK opened all UK cash machines to the similar German Girocard scheme operated by Zentraler Kreditausschuss . [ 24 ] Similarly the VocaLink agreement with the Pulse network allows Discover card and Diners Club International cardholders to use LINK ATMs. [ 25 ] VocaLink also provides a gateway service to Visa , MasterCard and China Union Pay .
Between 2007 and 2012, VocaLink operated a Single Euro Payments Area clearing and settlement mechanism, participating as a member of the European Automated Clearing House Association. The service was discontinued as a consequence of the low participation of VocaLink's shareholder UK banks into the platform. [ 26 ]
In May 2008, VocaLink signed a deal with the Swedish Automated Clearing House Bankgirot , to outsource part of the Swedish payments system. This was the first time that the processing of a national payments scheme has been transferred to a non-domestic player. [ 27 ]
In March 2014, in partnership with BCS Information Systems, VocaLink launched an immediate payments service, Fast And Secure Transfers (FAST) in Singapore , enabling fourteen Singaporean banks to offer the ability to transfer funds between bank accounts in real time. [ 28 ]
In July 2016, Mastercard announced that the company would be acquiring Vocalink. [ 29 ] [ 5 ] The acquisition was completed in May 2017 for a reported $920 million. [ 30 ] [ 31 ]
|
https://en.wikipedia.org/wiki/Vocalink
|
A vocoder ( /ˈvoʊkoʊdər/ , a portmanteau of voice and encoder ) is a category of speech coding that analyzes and synthesizes the human voice signal for audio data compression , multiplexing , voice encryption or voice transformation.
The vocoder was invented in 1938 by Homer Dudley at Bell Labs as a means of synthesizing human speech. [ 1 ] This work was developed into the channel vocoder which was used as a voice codec for telecommunications for speech coding to conserve bandwidth in transmission.
By encrypting the control signals, voice transmission can be secured against interception. Its primary use in this fashion is for secure radio communication. The advantage of this method of encryption is that none of the original signal is sent, only envelopes of the bandpass filters. The receiving unit needs to be set up in the same filter configuration to re-synthesize a version of the original signal spectrum.
The vocoder has also been used extensively as an electronic musical instrument . The decoder portion of the vocoder, called a voder , can be used independently for speech synthesis.
The human voice consists of sounds generated by the periodic opening and closing of the glottis by the vocal cords , which produces an acoustic waveform with many harmonics . This initial sound is then filtered by movements in the nose, mouth and throat (a complicated resonant piping system known as the vocal tract ) to produce fluctuations in harmonic content ( formants ) in a controlled way, creating the wide variety of sounds used in speech. There is another set of sounds, known as the unvoiced and plosive sounds, which are created or modified by a variety of sound generating disruptions of airflow occurring in the vocal tract .
The vocoder analyzes speech by measuring how its spectral energy distribution characteristics fluctuate across time. This analysis results in a set of temporally parallel envelope signals, each representing the individual frequency band amplitudes of the user's speech. Put another way, the voice signal is divided into a number of frequency bands (the larger this number, the more accurate the analysis) and the level of signal present at each frequency band, occurring simultaneously, measured by an envelope follower , represents the spectral energy distribution across time. This set of envelope amplitude signals is called the "modulator" .
To recreate speech, the vocoder reverses the analysis process, variably filtering an initial broadband noise (referred to alternately as the "source" or "carrier"), by passing it through a set of band-pass filters , whose individual envelope amplitude levels are controlled, in real time, by the set of analyzed envelope amplitude signals from the modulator.
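This analysis–resynthesis chain can be sketched compactly in Python with scipy; the band count, log-spaced band edges, 50 Hz envelope smoother and sawtooth carrier below are illustrative choices, not a reference design:

```python
import numpy as np
from scipy import signal

def channel_vocoder(modulator, carrier, fs, n_bands=12, fmin=100.0, fmax=4000.0):
    """Impose the spectral envelope of `modulator` (e.g. speech) onto `carrier`."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)        # log-spaced band edges
    env_lp = signal.butter(2, 50.0, btype='low', fs=fs, output='sos')
    out = np.zeros_like(carrier, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = signal.butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        # Envelope follower: band-pass the modulator, rectify, low-pass smooth.
        envelope = signal.sosfilt(env_lp, np.abs(signal.sosfilt(band, modulator)))
        # The same band of the carrier, amplitude-controlled by that envelope.
        out += signal.sosfilt(band, carrier) * np.clip(envelope, 0.0, None)
    return out / np.max(np.abs(out))                     # normalize to full scale

# Illustrative use: a sawtooth carrier yields the classic "robot voice".
fs = 16000
t = np.arange(2 * fs) / fs
carrier = signal.sawtooth(2 * np.pi * 110.0 * t)         # 110 Hz sawtooth carrier
# `modulator` would be a speech signal sampled at fs, e.g. read from a WAV file.
```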
The digital encoding process involves a periodic analysis of each of the modulator's multiband set of filter envelope amplitudes. This analysis results in a set of digital pulse code modulation stream readings. Then the pulse code modulation stream outputs of each band are transmitted to a decoder. The decoder applies the pulse code modulations as control signals to corresponding amplifiers of the output filter channels.
Information about the fundamental frequency of the initial voice signal (as distinct from its spectral characteristic) is discarded; it was not important to preserve this for the vocoder's original use as an encryption aid. It is this dehumanizing aspect of the vocoding process that has made it useful in creating special voice effects in popular music and audio entertainment.
Instead of a point-by-point recreation of the waveform, the vocoder process sends only the parameters of the vocal model over the communication link. Since the parameters change slowly compared to the original speech waveform, the bandwidth required to transmit speech can be reduced. This allows more speech channels to utilize a given communication channel , such as a radio channel or a submarine cable .
Analog vocoders typically analyze an incoming signal by splitting the signal into multiple tuned frequency bands or ranges. To reconstruct the signal, a carrier signal is sent through a series of these tuned band-pass filters. In the example of a typical robot voice the carrier is noise or a sawtooth waveform . There are usually between 8 and 20 bands.
The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands.
Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside the analysis bands for typical speech but are still important in speech. Examples are words that start with the letters s , f , ch or any other sibilant sound. Using this band produces recognizable speech, although somewhat mechanical sounding. Vocoders often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency. This is mixed with the carrier output to increase clarity.
In the channel vocoder algorithm, of the two components of an analytic signal , only the amplitude component is considered; simply ignoring the phase component tends to result in an unclear voice. On methods for rectifying this, see phase vocoder .
The development of a vocoder was started in 1928 by Bell Labs engineer Homer Dudley , [ 5 ] who was granted patents for it on November 16, 1937, [ 7 ] and March 21, 1939. [ 6 ]
To demonstrate the speech synthesis ability of its decoder section, the voder (voice operating demonstrator) [ 8 ] was introduced to the public at the AT&T building at the 1939–1940 New York World's Fair. [ 9 ] The voder consisted of an electronic oscillator – a sound source of pitched tone – and a noise generator for hiss , a bank of 10 resonator filters with variable-gain amplifiers acting as a vocal tract , and manual controllers including a set of pressure-sensitive keys for filter control and a foot pedal for pitch control of the tone. [ 10 ] The filters controlled by the keys convert the tone and the hiss into vowels , consonants , and inflections . This was a complex machine to operate, but a skilled operator could produce recognizable speech. [ 9 ] [ media 1 ]
Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs engineers in 1943. SIGSALY was used for encrypted voice communications during World War II . The KO-6 voice coder was released in 1949 in limited quantities; it was a close approximation to the SIGSALY at 1200 bit/s . In 1953, the KY-9 THESEUS [ 11 ] 1650 bit/s voice coder used solid-state logic to reduce the weight to 565 pounds (256 kg) from SIGSALY's 55 short tons (50,000 kg), and in 1961 the HY-2 voice coder, a 16-channel 2400 bit/s system, weighed 100 pounds (45 kg) and was the last implementation of a channel vocoder in a secure speech system. [ 12 ]
Later work in this field has since used digital speech coding . The most widely used speech coding technique is linear predictive coding (LPC). [ 13 ] Another speech coding technique, adaptive differential pulse-code modulation (ADPCM), was developed by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973. [ 14 ]
Even with the need to record several frequencies, and additional unvoiced sounds, the compression of vocoder systems is impressive. Standard speech-recording systems capture frequencies from about 500 to 3,400 Hz, where most of the frequencies used in speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate ). The sampling resolution is typically 8 or more bits per sample resolution, for a data rate in the range of 64 kbit/s , but a good vocoder can provide a reasonably good simulation of voice with as little as 5 kbit/s of data.
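The arithmetic behind these figures is direct: 8,000 samples/s × 8 bits/sample = 64 kbit/s, so a 5 kbit/s vocoder stream represents roughly a 13:1 reduction in data rate.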
Toll quality voice coders, such as ITU G.729 , are used in many telephone networks. G.729 in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly worse quality at data rates of 5.3 and 6.4 kbit/s . Many vocoder systems use lower data rates, but below 5 kbit/s voice quality begins to drop rapidly.
Several vocoder systems are used in NSA encryption systems :
Modern vocoders that are used in communication equipment and in voice storage devices today are based on the following algorithms:
Vocoders are also currently used in psychophysics , linguistics , computational neuroscience and cochlear implant research.
Since the late 1970s, most non-musical vocoders have been implemented using linear prediction , whereby the target signal's spectral envelope (formant) is estimated by an all-pole IIR filter . In linear prediction coding, the all-pole filter replaces the bandpass filter bank of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum) and again at the decoder to re-apply the spectral shape of the target speech signal.
One advantage of this type of filtering is that the location of the linear predictor's spectral peaks is entirely determined by the target signal, and can be as precise as allowed by the time period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks, where the location of spectral peaks is constrained by the available fixed frequency bands. LP filtering also has disadvantages in that signals with a large number of constituent frequencies may exceed the number of frequencies that can be represented by the linear prediction filter. This restriction is the primary reason that LP coding is almost always used in tandem with other methods in high-compression voice coders.
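As a concrete illustration of the whitening step, here is a minimal sketch of the autocorrelation method in Python/NumPy; real coders use the Levinson–Durbin recursion on windowed, overlapping frames, while this toy version (the helper names `lpc_coefficients` and `whiten` are hypothetical) solves the Yule–Walker equations directly.

```python
import numpy as np

def lpc_coefficients(frame, order=10):
    """Estimate all-pole (LPC) predictor coefficients by the autocorrelation
    method, solving the Yule-Walker normal equations directly."""
    frame = frame - np.mean(frame)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags 0..N-1
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def whiten(frame, a):
    """Inverse (whitening) filter: e[n] = x[n] - sum_k a[k-1] * x[n-k].
    The residual has an approximately flat spectrum."""
    order = len(a)
    padded = np.concatenate([np.zeros(order), frame])
    pred = np.array([padded[n:n + order][::-1] @ a for n in range(len(frame))])
    return frame - pred

# Example: whiten one 20 ms frame of a synthetic "voiced" signal.
fs = 8_000
t = np.arange(int(0.02 * fs)) / fs
x = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 450 * t)
x += 0.01 * np.random.default_rng(0).normal(size=x.size)  # keeps R well-conditioned
residual = whiten(x, lpc_coefficients(x, order=10))
```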
Waveform-interpolative (WI) vocoder was developed at AT&T Bell Laboratories around 1995 by W.B. Kleijn, and subsequently, a low- complexity version was developed by AT&T for the DoD secure vocoder competition. Notable enhancements to the WI coder were made at the University of California, Santa Barbara . AT&T holds the core patents related to WI and other institutes hold additional patents. [ 24 ] [ 25 ] [ 26 ]
For musical applications, a source of musical sounds is used as the carrier, instead of extracting the fundamental frequency. For instance, one could use the sound of a synthesizer as the input to the filter bank, a technique that became popular in the 1970s.
Werner Meyer-Eppler , a German scientist with a special interest in electronic voice synthesis, published a thesis in 1948 on electronic music and speech synthesis from the viewpoint of sound synthesis . [ 27 ] Later he was instrumental in the founding of the Studio for Electronic Music of WDR in Cologne, in 1951. [ 28 ]
One of the first attempts to use a vocoder in creating music was the Siemens Synthesizer at the Siemens Studio for Electronic Music, developed between 1956 and 1959. [ 29 ] [ 30 ] [ media 2 ]
In 1968, Robert Moog developed one of the first solid-state musical vocoders for the electronic music studio of the University at Buffalo . [ 31 ]
In 1968, Bruce Haack built a prototype vocoder, named Farad after Michael Faraday . [ 32 ] It was first featured on "The Electronic Record For Children" released in 1969 and then on his rock album The Electric Lucifer released in 1970. [ 33 ] [ media 3 ]
Vocoder effects have been used by musicians in both electronic music and as a special effect along with more traditional instruments. In 1969, Sly and the Family Stone used it in "Sex Machine", a song on the album Stand! . Other artists have made vocoders an essential part of their music, overall or during an extended phase; examples include the German synthpop group Kraftwerk , the Japanese new wave group Polysics , Stevie Wonder (" Send One Your Love ", " A Seed's a Star ") and jazz/fusion keyboardist Herbie Hancock during his late 1970s period. In 1982 Neil Young used a Sennheiser Vocoder VSM201 on six of the nine tracks on Trans . [ 34 ] The chorus and bridge of Michael Jackson 's " P.Y.T. (Pretty Young Thing) " features a vocoder ("Pretty young thing/You make me sing"), courtesy of session musician Michael Boddicker .
Among the most consistent users of the vocoder in emulating the human voice are Daft Punk , who have used this instrument from their first album Homework (1997) to their latest work Random Access Memories (2013) and consider the convergence of technological and human voice "the identity of their musical project". [ 35 ] For instance, the lyrics of " Around the World " (1997) are integrally vocoder-processed, " Get Lucky " (2013) features a mix of natural and processed human voices, and " Instant Crush " (2013) features Julian Casablancas singing into a vocoder.
Robot voices became a recurring element in popular music during the 20th century. Apart from vocoders, several other methods of producing variations on this effect include: the Sonovox , Talk box , Auto-Tune , [ media 4 ] linear prediction vocoders, speech synthesis , [ media 5 ] [ media 6 ] ring modulation and comb filter .
Vocoders are used in television production , filmmaking and games, usually for robots or talking computers. The robot voices of the Cylons in Battlestar Galactica were created with an EMS Vocoder 2000. [ 34 ] The 1980 version of the Doctor Who theme, as arranged and recorded by Peter Howell , has a section of the main melody generated by a Roland SVC-350 vocoder. A similar Roland VP-330 vocoder was used to create the voice of Soundwave , a character from the Transformers series.
|
https://en.wikipedia.org/wiki/Vocoder
|
In mathematics, a Vogan diagram , named after David Vogan , is a variation of the Dynkin diagram of a real semisimple Lie algebra that indicates the maximal compact subgroup. Although they resemble Satake diagrams they are a different way of classifying simple Lie algebras.
This algebra -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Vogan_diagram
|
Voges–Proskauer / ˈ f oʊ ɡ ə s ˈ p r ɒ s k aʊ . ər / or VP is a test used to detect acetoin in a bacterial broth culture. The test is performed by adding alpha-naphthol and potassium hydroxide to the Voges-Proskauer broth, which is a glucose-phosphate broth that has been inoculated with bacteria. A cherry red color indicates a positive result, while a yellow-brown color indicates a negative result. [ 1 ]
The test depends on the digestion of glucose to acetylmethylcarbinol . In the presence of oxygen and strong base, the acetylmethylcarbinol is oxidized to diacetyl, which then reacts with guanidine compounds commonly found in the peptone medium of the broth. Alpha-naphthol acts as a color enhancer, but the color change to red can occur without it.
Procedure: First, add the alpha-naphthol; then, add the potassium hydroxide. A reversal in the order of the reagents being added may result in a weak-positive or false-negative reaction.
VP is one of the four tests of the IMViC series, which tests for evidence of an enteric bacterium. The other three tests include: the indole test [I], the methyl red test [M], and the citrate test [C]. [ 2 ]
VP positive organisms include Enterobacter , Klebsiella , Serratia marcescens , Hafnia alvei , Vibrio cholerae biotype El Tor , and Vibrio alginolyticus . [ 3 ]
VP negative organisms include Citrobacter sp., Shigella , Yersinia , Edwardsiella , Salmonella , Vibrio furnissii , Vibrio fluvialis , Vibrio vulnificus , and Vibrio parahaemolyticus . [ 3 ]
The reaction was developed in 1898 by the German bacteriologists Daniel Wilhelm Otto Voges and Bernhard Proskauer at the Institute for Infectious Diseases .
|
https://en.wikipedia.org/wiki/Voges-Proskauer_reaction
|
A voice frequency ( VF ) or voice band is the range of audio frequencies used for the transmission of speech .
In telephony , the usable voice frequency band ranges from approximately 300 to 3400 Hz . [ 2 ] It is for this reason that the ultra low frequency band of the electromagnetic spectrum between 300 and 3000 Hz is also referred to as voice frequency , being the electromagnetic energy that represents acoustic energy at baseband . The bandwidth allocated for a single voice-frequency transmission channel is usually 4 kHz, including guard bands , [ 2 ] allowing a sampling rate of 8 kHz to be used as the basis of the pulse-code modulation system used for the digital PSTN . Per the Nyquist–Shannon sampling theorem , the sampling frequency (8 kHz) must be at least twice the highest frequency component of the voice signal (4 kHz, enforced by appropriate filtering prior to sampling at discrete times) for effective reconstruction of the voice signal.
The voiced speech of a typical adult male will have a fundamental frequency from 90 to 155 Hz, and that of a typical adult female from 165 to 255 Hz. [ 3 ] Thus, the fundamental frequency of most speech falls below the bottom of the voice frequency band as defined. However, enough of the harmonic series will be present for the missing fundamental to create the impression of hearing the fundamental tone.
The speed of sound at room temperature (20°C) is 343.15 m/s. [ 4 ] Using the formula λ = v / f {\displaystyle \lambda =v/f} , where λ {\displaystyle \lambda } is the wavelength, v {\displaystyle v} the speed of sound, and f {\displaystyle f} the frequency, we have the following wavelengths:
Typical female voices range from 1.3 metres (4 ft ) to 2 metres (7 ft).
Typical male voices range from 2.2 metres (7 ft) to 4 metres (13 ft).
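These wavelength figures follow directly from λ = v/f; a small script to reproduce them:

```python
V_SOUND = 343.15  # speed of sound in m/s at 20 °C

def wavelength(frequency_hz):
    """Wavelength in metres: lambda = v / f."""
    return V_SOUND / frequency_hz

# Fundamental-frequency ranges quoted above:
print(wavelength(255), wavelength(165))  # female: ~1.35 m to ~2.08 m
print(wavelength(155), wavelength(90))   # male:   ~2.21 m to ~3.81 m
```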
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ).
|
https://en.wikipedia.org/wiki/Voice_frequency
|
Voice logging is the practice of regularly recording telephone conversations. Business sectors which often do voice logging include public safety (e.g. 9-1-1 and emergency response systems), customer service call centers (conversations are recorded for quality assurance purposes), and finance (e.g. telephone-initiated stock trades are recorded for compliance purposes). Although voice logging is usually performed on conventional telephone lines, it is also frequently used for recording open microphones (e.g. on a stock trading floor) and for broadcast radio.
Early voice loggers recorded POTS lines onto analog magnetic tape. As telephony became more digital, so did voice loggers, and starting in the 1990s, voice loggers digitized the audio using a codec and recorded to digital tape. With modern VoIP systems, many voice loggers now simply store calls to a file on a hard drive.
The original voice logging system was a large analog tape recorder developed by Magnasync in 1950. In 1953, Magnasync Corporation sold 300 voice loggers to the U.S. Air Force.
|
https://en.wikipedia.org/wiki/Voice_logging
|
Voice over New Radio or Voice over 5G ( acronym VoNR or Vo5G ) is a high-speed wireless communication standard for voice services over 5G networks, utilizing mobile phones , data terminals, IoT devices , and wearables. [ 1 ] Like 4G networks, 5G does not natively support voice calls traditionally carried over circuit-switched technology. [ 2 ] Instead, voice communication is transmitted over the IP network , similar to IPTV services. To address this, Voice over NR (VoNR) is implemented, allowing voice calls to be carried over the 5G network using the same packet-switched infrastructure as other IP-based services, such as video streaming and messaging. [ 3 ]
Similar to how VoLTE enables voice calls on 4G networks, VoNR (Vo5G) serves as the 5G equivalent for voice communication, but it requires a 5G standalone (SA) network to function. [ 4 ] VoNR offers better voice quality than its 4G predecessor, primarily due to the inherent lower latency of 5G NR , allowing for faster call setup and improved overall communication. [ 5 ] Additionally, VoNR removes the LTE anchor, enabling the voice call to stay entirely within the 5G network. [ 6 ]
VoNR (Vo5G) calls are generally charged at the same rate as other calls, and to make a VoNR call, the device, its firmware, and the mobile telephone provider must all support the service and work together in the specific area.
This mobile technology related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Voice_over_NR
|
Voice phishing , or vishing , [ 1 ] is the use of telephony (often Voice over IP telephony) to conduct phishing attacks.
Landline telephone services have traditionally been trustworthy; terminated in physical locations known to the telephone company, and associated with a bill-payer. Now however, vishing fraudsters often use modern Voice over IP (VoIP) features such as caller ID spoofing and automated systems ( IVR ) to impede detection by law enforcement agencies. Voice phishing is typically used to steal credit card numbers or other information used in identity theft schemes from individuals.
Usually, voice phishing attacks are conducted using automated text-to-speech systems that direct a victim to call a number controlled by the attacker, however some use live callers. [ 1 ] Posing as an employee of a legitimate body such as the bank, police, telephone or internet provider, the fraudster attempts to obtain personal details and financial information regarding credit card, bank accounts (e.g. the PIN), as well as personal information of the victim. With the received information, the fraudster might be able to access and empty the account or commit identity fraud . Some fraudsters may also try to persuade the victim to transfer money to another bank account or withdraw cash to be given to them directly. [ 2 ] Callers also often pose as law enforcement or as an Internal Revenue Service employee. [ 3 ] [ 4 ] Scammers often target immigrants and the elderly, [ 5 ] who are coerced to wire hundreds to thousands of dollars in response to threats of arrest or deportation. [ 3 ]
Bank account data is not the only sensitive information being targeted. Fraudsters sometimes also try to obtain security credentials from consumers who use Microsoft or Apple products by spoofing the caller ID of Microsoft or Apple Inc.
Audio deepfakes have been used to commit fraud, by fooling people into thinking they are receiving instructions from a trusted individual. [ 6 ]
Common motives include financial reward, anonymity, and fame. [ 13 ] Confidential banking information can be utilized to access the victims' assets. Individual credentials can be sold to individuals who would like to hide their identity to conduct certain activities, such as acquiring weapons. [ 13 ] This anonymity is dangerous, as it makes perpetrators difficult for law enforcement to track. Another rationale is that phishers may seek fame among the cyber attack community. [ 13 ]
Voice phishing comes in various forms. There are various methods and various operation structures for the different types of phishing. Usually, scammers will employ social engineering to convince victims of a role they are playing and to create a sense of urgency to leverage against the victims.
Voice phishing has unique attributes that separate the attack method from similar alternatives such as email phishing. With the increased reach of mobile phones, phishing allows for the targeting of individuals without working knowledge of email but who possess a phone, such as the elderly. The historical prevalence of call centers that ask for personal and confidential information additionally allows for easier extraction of sensitive information from victims, due to the trust many users place in someone speaking to them on the phone. Through voice communication, vishing attacks can feel personal and therefore be more impactful than similar alternatives such as email. The faster response time to an attack attempt, owing to the greater accessibility of a phone, is another unique aspect in comparison to email, where the victim may take longer to respond. [ 14 ] A phone number is difficult to block, and scammers can often simply change phone numbers if a specific number is blocked, and often find ways around rules and regulations. Phone companies and governments are constantly seeking new ways to curb false scam calls. [ 15 ]
A voice phishing attack may be initiated through different delivery mechanisms. [ 16 ] A scammer may directly call a victim and pretend to be a trustworthy person by spoofing their caller ID, appearing on the phone as an official or someone nearby. [ 16 ] Scammers may also deliver pre-recorded, threatening messages to victims' voicemail inboxes to coerce victims into taking action. [ 16 ] Victims may also receive a text message which requests them to call a specified number and be charged for calling the specific number. [ 16 ] Additionally, the victim may receive an email impersonating a bank; the victim then may be coerced into providing private information, such as a PIN, account number, or other authentication credentials, in the phone call. [ 16 ]
Voice phishing attackers will often employ social engineering to convince victims to give them money and/or access to personal data. [ 17 ] Generally, scammers will attempt to create a sense of urgency and/or a fear of authority to use as a leverage against the victims. [ 16 ]
Voice phishing attacks can be difficult for victims to identify because legitimate institutions such as banks sometimes ask for sensitive personal information over the phone. [ 8 ] Phishing schemes may employ pre-recorded messages of notable, regional banks to make them indistinguishable from legitimate calls. [ citation needed ] Additionally, victims, particularly the elderly, [ 8 ] may forget or not know about scammers' ability to modify their caller ID, making them more vulnerable to voice phishing attacks. [ citation needed ]
The US Federal Trade Commission (FTC) suggests several ways for the average consumer to detect phone scams. [ 22 ] The FTC warns against making payments using cash, gift cards, and prepaid cards, and asserts that government agencies do not call citizens to discuss personal information such as Social Security numbers. [ 22 ] Additionally, potential victims can pay attention to characteristics of the phone call, such as the tone or accent of the caller [ 8 ] [ 28 ] or the urgency of the phone call [ 22 ] to determine whether or not the call is legitimate.
The primary strategy recommended by the FTC to avoid falling victim to voice phishing is to not answer calls from unknown numbers. [ 9 ] However, when a scammer utilizes VoIP to spoof their caller ID, or in circumstances where victims do answer calls, other strategies include not pressing buttons when prompted, and not answering any questions asked by a suspicious caller. [ 9 ]
On March 31, 2020, in an effort to reduce vishing attacks that utilize caller ID spoofing, the US Federal Communications Commission adopted a set of mandates known as STIR/SHAKEN , a framework intended to be used by phone companies to authenticate caller ID information. [ 29 ] All U.S. phone service providers had until June 30, 2021, to comply with the order and integrate STIR/SHAKEN into their infrastructure to lessen the impact of caller ID spoofing. [ 29 ]
In some countries, social media is used to call and communicate with the public. On certain social media platforms, government and bank profiles are verified, so an unverified profile claiming to be a government or bank can be assumed to be fake. [ 30 ]
The most direct and effective mitigation strategy is training the general public to understand common traits of a voice phishing attack to detect phishing messages. [ 31 ] A more technical approach would be the use of software detection methods. Generally, such mechanisms are able to differentiate between phishing calls and honest messages and can be more cheaply implemented than public training. [ 31 ]
A straightforward method of phishing detection is the usage of blacklists. Recent research has attempted to make accurate distinctions between legitimate calls and phishing attacks using artificial intelligence and data analysis. [ 32 ] To further advance research in the fake audio field, different augmentations and feature designs have been explored. [ 33 ] By analyzing and converting phone calls to texts, artificial intelligence mechanisms such as natural language processing can be used to identify if the phone call is a phishing attack. [ 32 ]
Specialized systems, such as phone apps, can submit fake data to phishing calls. Additionally, various law enforcement agencies are continually making efforts to discourage scammers from conducting phishing calls by imposing harsher penalties upon attackers. [ 31 ] [ 29 ]
Between 2012 and 2016, a voice phishing scam ring posed as Internal Revenue Service and immigration employees to more than 50,000 individuals, stealing hundreds of millions of dollars as well as victims' personal information. [ 5 ] Alleged co-conspirators from the United States and India threatened vulnerable respondents with "arrest, imprisonment, fines, or deportation." [ 5 ] In 2018, 24 defendants were sentenced, with the longest imprisonment being 20 years. [ 5 ]
On March 28, 2021, the Federal Communications Commission issued a statement warning Americans of the rising number of phone scams regarding fraudulent COVID-19 products. [ 34 ] Voice phishing schemes attempting to sell products which putatively "prevent, treat, mitigate, diagnose or cure" COVID-19 have been monitored by the Food and Drug Administration as well. [ 35 ]
Beginning in 2015, a phishing scammer impersonated Hollywood make-up artists and powerful female executives to coerce victims to travel to Indonesia and pay sums of money under the premise that they would be reimbursed. Using social engineering, the scammer researched the lives of their victims extensively to mine details that would make the impersonation more believable. The scammer called victims directly, often multiple times a day and for hours at a time, to pressure victims. [ 36 ]
The 2015 cyber attack campaign against the Israeli academic Dr. Thamar Eilam Gindin illustrates the use of a vishing attack as a precursor to escalating future attacks with the new information coerced from a victim. After the Iran-expert academic mentioned connections within Iran on Israeli Army Radio, Thamar received a phone call requesting an interview with her for the Persian BBC. To view the questions ahead of the proposed interview, Thamar was instructed to access a Google Drive document that requested her password for access. When she entered her password to access the malicious document, the attacker obtained credentials that could be used for further, escalated attacks. [ 37 ]
In Sweden, Mobile Bank ID is a phone app (launched 2011) that is used to identify a user in internet banking. The user logs in to the bank on a computer, the bank activates the phone app, the user enters a password in the phone and is logged in. In this scam, malicious actors called people claiming to be a bank officer, claimed there was a security problem, and asked the victim to use their Mobile Bank ID app. Fraudsters were then able to log in to the victim's account without the victim providing their password. The fraudster was then able to transfer money from the victim's account. If the victim was a customer of the Swedish bank Nordea , scammers were also able to use the victim's account directly from their phone. In 2018, the app was changed to require users to photograph a QR code on their computer screen. This ensures that the phone and the computer are colocated, which has largely eliminated this type of fraud.
|
https://en.wikipedia.org/wiki/Voice_phishing
|
Voice synthesizer software includes:
This software article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Voice_synthesizer_software
|
A voice warning system is a system designed to alert the crew of an aircraft to imminent safety hazards. It is often known as a Bitchin' Betty , a slang term used by some pilots and aircrew and submariners (mainly North American ).
The annunciation voice, in at least some aircraft systems, may be either male or female. In some cases, this may be selected according to pilot preference. [ citation needed ] If the voice is female, it may be referred to as Bitching Betty; if the voice is male, it may be referred to as Barking Bob . [ 1 ] A female voice is heard on military aircraft such as the F-16 Fighting Falcon , [ 2 ] the Eurofighter Typhoon and the Mikoyan MiG-29 . A male voice is heard on Boeing commercial airliners and is also used in the BAE Hawk .
In the United Kingdom , the term Nagging Nora is sometimes used. [ 3 ] [ 4 ] In New Zealand, the term used for Boeing aircraft is Hank the Yank . The voice warning system used on London Underground trains, which also uses a female voice, is known to some staff as Sonya , as it "gets on ya nerves". [ 5 ]
There are two notable systems, which employ voice warnings, and which are found in most commercial and military aircraft: TCAS ( traffic collision avoidance system ) and TAWS/EGPWS ( terrain avoidance warning system / enhanced ground proximity warning system ). Both systems provide warnings and verbal instructions.
The auditory warnings produced by these systems usually include a separate attention-getting sound, followed by one or more verbal commands to the pilot/crew. Perhaps the most widely known example, encountered in many video games and movies, is the "Pull up! Pull up!" command. Other common spoken warnings are "Terrain, terrain", " Windshear ! Windshear!", or "Traffic! Traffic!". These may be followed by short directions to the pilot, advising how the situation may be resolved. TCAS and TAWS/EGPWS are usually integrated to prevent conflicting advice, such as an instruction to "Descend! Descend!" to avoid another aircraft when the aircraft is already close to the ground.
Modern Boeing and Airbus airliners both feature a male voice, which calls out height above terrain on approach to landing , and other related warnings. Airbus aircraft feature a distinctive British RP accent (heard on recent builds of the A320 and all Airbus aircraft since the A330 and A340 ), or a French accent (heard on ECAM-equipped A300s , A310s and early A320s).
A female voice was incorporated into McDonnell Douglas DC-9 , MD-80/90 , McDonnell Douglas MD-11 and Boeing 717 (inherited from McDonnell Douglas after the merger with Boeing) series aircraft in their Central Aural Warning Systems (CAWS). These systems provided a voice for most warnings, including fire, altitude, cabin altitude, stall, overspeed, autopilot disconnect, and so on.
In more advanced cockpits, on newer aircraft, there may be many other voice warnings managed by an integrated indication and crew alerting system (ICAS) such as "Gear up, Gear up!" These may be warning words or phrases, or simply declarative statements that augment the pilot's situation awareness .
Early human factors research in aircraft and other domains indicated that female voices were more authoritative to male pilots and crew members and were more likely to get their attention. [ citation needed ] Much of this research was based on pilot experiences, particularly in combat situations, where the pilots were being guided by female air traffic controllers . [ citation needed ] They reported being able to most easily pick out the female voice from amid the flurry of radio chatter. [ citation needed ]
In October 1996, a report by UK's Defence Research Agency on the fast jet Collision Warning System Technical Demonstrator Programme (Reference DRAMS/A VS/CR96294/1) reported: "The primary alerting signal from the CWS to the crew was an audible warning passed over the aircraft intercom system. For the first flight these warnings were given in a male voice but, on the advice of the crew, this was changed to a female voice for the second flight onwards. They said that a female voice offered greater clarity." [ 6 ]
More recent research, however, carried out since more women have been employed as pilots and air traffic controllers, indicates that the original popular hypothesis may be unreliable. Edworthy and colleagues in 2003, based at the University of Plymouth in UK, for example, found that both acoustic and non-acoustic differences between male and female speakers were negligible. Therefore, they recommended the choice of speaker should depend on the overlap of noise and speech spectra. Female voices did appear to have an advantage because they could portray a greater range of urgencies due to their generally higher pitch and pitch range. They reported an experiment showing that knowledge about the sex of a speaker has no effect on judgments of perceived urgency, with acoustic variables accounting for such differences. [ 7 ]
Arrabito in 2009, however, at Defence Research and Development Canada in Toronto , found that with simulated cockpit background radio traffic, a male voice rather than a female voice, in a monotone or urgent announcing style, resulted in the largest proportion of correct and fastest identification response times to verbal warnings, regardless of the gender of the listener. [ 8 ]
There have been several "Bitching Bettys" over the years for various commercial and military aircraft:
Voice warning systems included in cars of the late 1970s to early 1980s, such as the Datsun and Nissan "Z-Car" series, found in the 280ZX and 1984–1988 300ZX (optional in the base model and standard in the Turbo model), and the Datsun Maxima and Nissan Maxima of the early 1980s, were also known as Bitching Betty. The Datsun system issued commands such as "lights are on" or "left door is open". The system used a small box located under the vehicle's dashboard containing a small, white plastic record disc, read by a magnetic cartridge, that played spoken commands through the vehicle's audio system 's speakers, similar to some Texas Instruments talking toys of the time period. Datsun 's original name for the feature was "Talking Lady". [ citation needed ]
Some Acuras ( Honda 's luxury car marque in the United States , Canada , and China ) of the mid 2000s would ask the driver to "please fasten your seatbelt " when the driver's seatbelt was not fastened, in addition to a chime warning. [ citation needed ]
The M1 Abrams tank features a female voice warning system to warn the crew of open hatches and critical faults when equipped with the System Enhancement Package. [ citation needed ]
|
https://en.wikipedia.org/wiki/Voice_warning_system
|
The void ratio ( e {\displaystyle e} ) of a mixture of solids and fluids (gases and liquids), or of a porous composite material such as concrete , is the ratio of the volume of the voids ( V V {\displaystyle V_{V}} ) filled by the fluids to the volume of all the solids ( V S {\displaystyle V_{S}} ).
It is a dimensionless quantity in materials science and in soil science , and is closely related to the porosity (often noted as ϕ {\displaystyle \phi } , or η {\displaystyle {\eta }} , depending on the convention), the ratio of the volume of voids ( V V {\displaystyle V_{V}} ) to the total (or bulk) volume ( V T {\displaystyle V_{T}} ), as follows:
e = V V V S = V V V T − V V = ϕ 1 − ϕ {\displaystyle e={\frac {V_{V}}{V_{S}}}={\frac {V_{V}}{V_{T}-V_{V}}}={\frac {\phi }{1-\phi }}}
in which, for idealized porous media with a rigid and undeformable skeleton structure ( i.e., without variation of total volume ( V T {\displaystyle V_{T}} ) when the water content of the sample changes (no expansion or swelling with the wetting of the sample); nor contraction or shrinking effect after drying of the sample), the total (or bulk) volume ( V T {\displaystyle V_{T}} ) of an ideal porous material is the sum of the volume of the solids ( V S {\displaystyle V_{S}} ) and the volume of voids ( V V {\displaystyle V_{V}} ):
V T = V S + V V {\displaystyle V_{T}=V_{S}+V_{V}}
(in a rock , or in a soil , this also assumes that the solid grains and the pore fluid are clearly separated, so swelling clay minerals such as smectite , montmorillonite , or bentonite containing bound water in their interlayer space are not considered here.)
and
ϕ = V V V T = V V V S + V V = e 1 + e {\displaystyle \phi ={\frac {V_{V}}{V_{T}}}={\frac {V_{V}}{V_{S}+V_{V}}}={\frac {e}{1+e}}}
where e {\displaystyle e} is the void ratio, ϕ {\displaystyle \phi } is the porosity , V V is the volume of void-space (gases and liquids), V S is the volume of solids, and V T is the total (or bulk) volume. This figure is relevant in composites , in mining (particular with regard to the properties of tailings ), and in soil science . In geotechnical engineering , it is considered one of the state variables of soils and represented by the symbol e {\displaystyle e} . [ 1 ] [ 2 ]
Note that in geotechnical engineering , the symbol ϕ {\displaystyle \phi } usually represents the angle of shearing resistance, a shear strength (soil) parameter. Because of this, in soil science and geotechnics, these two equations are usually presented using η {\displaystyle {\eta }} for porosity: [ 3 ] [ 4 ]
e = V V V S = η 1 − η {\displaystyle e={\frac {V_{V}}{V_{S}}}={\frac {\eta }{1-\eta }}}
and
η = V V V T = e 1 + e {\displaystyle \eta ={\frac {V_{V}}{V_{T}}}={\frac {e}{1+e}}}
where e {\displaystyle e} is the void ratio, η {\displaystyle {\eta }} is the porosity, V V is the volume of void-space (air and water), V S is the volume of solids, and V T is the total (or bulk) volume. [ 5 ]
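The two conversions above are easily mechanized; a short sketch in Python (the function names are illustrative):

```python
def void_ratio_from_porosity(phi):
    """e = phi / (1 - phi), valid for 0 <= phi < 1."""
    return phi / (1.0 - phi)

def porosity_from_void_ratio(e):
    """phi = e / (1 + e)."""
    return e / (1.0 + e)

# Example: a soil with 40% porosity.
e = void_ratio_from_porosity(0.40)    # ~0.667
phi = porosity_from_void_ratio(e)     # round-trips back to 0.40
```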
|
https://en.wikipedia.org/wiki/Void_ratio
|
Voided biaxial slabs , sometimes called biaxial slabs or voided slabs , are a type of reinforced concrete slab which incorporates air-filled voids to reduce the volume of concrete required. These voids enable cheaper construction and less environmental impact. [ citation needed ] Another major benefit of the system is its reduction in slab weight compared with regular solid decks. Up to 50% of the slab volume may be removed in voids, resulting in less load on structural members. [ 1 ] This also allows increased weight and/or span, since the self-weight of the slab contributes less to the overall load.
Concrete has numerous applications in building construction, but its use for horizontal slabs is limited by its relatively high density which reduces the maximum span. [ 2 ] The usual method of rectifying this disadvantage is to incorporate some kind of reinforcement, which enables concrete slabs to be used for a broad range of spans and loading conditions. [ 3 ] Traditional approaches to structural reinforcement involve embedding another material inside the concrete, however, biaxial slabs provide an alternative solution in the form of a two-way slab which incorporates orthogonal concrete "beams" within the slab. [ 4 ] This allows greater support in both horizontal directions in order to transfer weight to a vertical member. [ 5 ]
The general concept of voided biaxial slabs relies on voids created within the concrete at the time of casting. This creates an internal array of hollow boxes in the slab, which acts as grid of horizontal supports for the flat surface on top. Another advantage is the reduction in weight, achieved by removing mass which does not directly transfer weight to a vertical member. Typical solid slabs have a loading capacity of around one-third of their own weight, which can create problems for long spans and high loadings. [ 2 ] By reducing the weight of the slab without compromising its structural strength, it is possible to create a thicker slab to support more weight over a longer span.
Hollow-core slabs , also known as voided slabs, initially appeared as one-way elements in Europe during the 1950s, and are still commonly manufactured in precast form for applications where fast construction and low self-weight are required. [ 2 ] [ 6 ] Waffle slabs are a common type of hollow-core slab which use the same principle as voided biaxial slabs. However, their voids are placed on the underside of the slab rather than embedded within the slab, leading to lower shear strength and fire resistance. [ 7 ] There has been a range of proprietary implementations of voided biaxial slabs, including the use of polystyrene blocks as a filler material in the voids. [ 6 ] However, many implementations have suffered from flexural cracking and lack of shear resistance. [ 2 ] [ 6 ]
All voided biaxial slabs incorporate an array of rigid void formers which contain air within the voids. These void formers are most commonly made of plastic such as high-density polyethylene , and may use recycled materials. [ 7 ] The void formers are produced in a variety of shapes depending on the design of the slab. Common designs include spheres, boxes, ellipsoids and toroids . [ 6 ]
The voids are usually placed in a grid-like arrangement, temporarily supported by a framework which is eventually enveloped in concrete. [ 7 ] This framework has been implemented in various ways, but the most efficient method uses a steel mesh in order to reduce material use and create an optimal geometric proportion between concrete, reinforcement, and voids. [ 2 ]
The voids are positioned in the middle of the cross section, where concrete is least beneficial to the structure. The integrity of the solid layers is maintained, as the top and bottom of the slab can experience particularly high stresses. This enables the slab to effectively resist both positive and negative bending moments. [ 2 ]
Since the underside of the slab is flat it may be finished to create an interior ceiling, in contrast to the contoured underside of waffle slabs.
Some vendors of voided biaxial slabs supply prefabricated components which are quicker to install onsite. Prefabricated slabs also have the advantage of a smooth underside suitable for use as a ceiling without further finishing. Varying degrees of prefabrication are available, including entire slabs. [ 1 ] Prefabricated modules commonly consist of a fully cast piece of slab, including all components encased in concrete. This technique consists of a "bubble-reinforced sandwich" of reinforcing mesh and voids cast in concrete. A contiguous layer of smooth finish concrete is then poured onsite, along with the addition of structural anchoring to fix the modules together. [ 8 ]
Voided biaxial slabs cast onsite take longer to construct than prefabricated slabs, but are sometimes cheaper. In a typical casting procedure, a decking of formwork is constructed out of metal or wood. This provides temporary support for the voids and the curing concrete. After the decking is constructed, reinforcing mesh is installed to support the voids. Alternatively, the voids and mesh may be supplied as a prefabricated module. Since the air in the voids is of lower density than the surrounding concrete, it tends to float to the surface of the concrete. To ameliorate this, the slab may be cast in multiple layers so that the mesh is initially anchored and is then able to restrain the voids from floating upwards in later pours. [ 9 ]
In 2017 the BubbleDeck system caused controversy due to the collapse of a parking garage at Eindhoven airport in the Netherlands . [ 10 ] This was due to insufficient shear strength at the interface between the precast concrete slabs, potentially caused by high temperatures during construction. [ 11 ] After the incident an investigation was started among buildings using the same flooring system, leading to the closure of several buildings in the Netherlands, including one at the University of Rotterdam and a school building under construction in Hoeven . [ 12 ]
Investigations according to Eurocodes have concluded that voided biaxial slabs may be modelled like solid slabs; the degree to which this holds depends on the shape of the voids. [ 13 ] This is considered an advantage over one-way ribbed slabs, which must be calculated as an array of beams.
Compared to traditional solid slabs, the reduced self-weight of biaxial slabs allows for longer spans and/or reduced deck thickness. The overall mass of concrete can be reduced by 35–50% depending on the design, [ 1 ] as a consequence of reduced slab mass, as well as lower requirements for vertical structure and foundations. Biaxial slabs commonly span up to 20 metres at a thickness of around 500 mm. [ citation needed ] The added strength also reduces the acoustic transmittance of the slab for low frequencies.
The reduced mass of biaxial slabs also results in a more environmentally friendly product which produces less CO 2 emissions both in its construction and indirectly through the reduction of surrounding structural support. Total carbon emissions may be reduced by up to 41%. [ 1 ] Slabs are one of the greatest consumers of concrete in many buildings, [ 14 ] so reducing the slab mass can make a relatively large difference to the environmental impact of a building's construction.
Biaxial slabs may be marginally cheaper than solid slabs, partly due to the lower mass. If using prefabricated versions, labor can also be significantly reduced, resulting in faster and cheaper construction. This can yield time savings of up to 40% compared with traditional solid slabs. [ 1 ] However, this is heavily dependent on the particular system, and systems relying on onsite placement of void formers require much more labor than solid slabs. [ 2 ]
Compared to one-way hollow-core slabs, biaxial slabs are more resistant to seismic disturbance. One-way decks are supported by a combination of walls and beams, leading to a relatively rigid structure which increases the risk of progressive collapse. [ 15 ]
One of the most significant differences between solid slabs and voided biaxial slabs is their resistance to shear force. Due to a lower volume of concrete, the shear resistance is also reduced. [ 2 ] For slabs using spherical voids, the shear resistance is approximately proportional to the volume of concrete, as the geometry of the voids causes efficient transfer of force to load-bearing parts, enabling all the concrete to be effective. Other shapes of voids, with flat or flattened surfaces, will result in more concrete and/or less strength. This relates especially to shear capacity, where the capacity of a slab with boxes can be 40% lower than for a slab of identical height using spherical voids. For punching shear, the capacity of a slab with spherical voids can be 600% higher than for a box slab. In some cases where greater shear resistance is required in a localised area (such as junctions with piers or walls), the voids may be omitted, leading to a partially solid slab. [ 13 ]
|
https://en.wikipedia.org/wiki/Voided_biaxial_slab
|
Voids in Mineral Aggregate or VMA is the intergranular space occupied by asphalt and air in a compacted asphalt mixture. In a component diagram , it is the sum of the volume of air and the volume of effective asphalt. The volume of absorbed asphalt is not considered to be a part of VMA because it is part of the pore structure of the mineral aggregate .
VMA = V effective asphalt + V air voids {\displaystyle {\text{VMA}}=V_{\text{effective asphalt}}+V_{\text{air voids}}}
This material -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Voids_in_mineral_aggregate
|
A voie verte or greenway is an autonomous communication route reserved for non-motorized traffic, such as pedestrians and cyclists . Voies vertes are developed with a view to integrated development that enhances the environment , heritage, quality of life, and user-friendliness. In Europe, they have been organized since October 1997 within the framework of the European Green Network [ 1 ] [ 2 ] to coordinate and regulate uses often prohibited in certain countries or that compete with motorized practices. [ 3 ]
In this regard, towpaths , old rural paths, and disused railway tracks are privileged mediums for the development of voies vertes. [ 4 ] If managed appropriately (through sustainable gardening and restoration ecology , and without the use of pesticides in the surroundings, so that they can potentially play a role in green infrastructure and the blue network), voies vertes are one of the elements of sustainable development policies in the relevant areas.
For English speakers, greenways refers to voies vertes, but also more generally to "a road that is good from an environmental point of view" (Turner, 1995) [ 5 ] or, in England, according to a survey cited by Turner in 2006, "a linear space containing elements planned, designed, and managed for multiple purposes, including ecological, recreational, cultural, aesthetic, and others compatible with the concept of sustainable land use". The term also covers a wide range of landscape and urban planning strategies that incorporate, to varying degrees, an environmental concern associated with transportation infrastructure, [ 6 ] [ 7 ] the edges of which have often acquired special value [ 8 ] and which are sometimes associated with the concept of a biological corridor in Europe. [ 9 ]
From 1975 to 1995, voies vertes proliferated significantly in the urban landscapes of so-called developed countries. [ 10 ] For example, by 1995, more than 500 communities were building them in North America alone. They address new human needs while also extending some of the functions of ancient rural roads. More than simple facilities or landscaping, they increasingly aim to provide a counterbalance to the loss of natural landscape in the context of increasing urbanization and agricultural industrialization. As times changed, the notion of green paths or green corridors ( chemins verts , corridors verts ) evolved to meet new needs and challenges. [ 10 ]
Three distinct stages (or "generations") of voies vertes can be identified as forms of urban and peri-urban landscape:
In Belgium, a network of 2200 km of voies vertes was already defined in 2003, of which 900 km were developed. [ 12 ]
In the Walloon Region , they form the RAVeL network .
In Flanders , there is a network of towpaths , railway trails , and other independent cycle paths. Most are integrated into the numbered-node cycle networks of the provinces , or belong to LF-routes (Dutch: lange-afstandsfietsroute , long-distance tourist cycle routes) or the bicycle highway network (Dutch: fietssnelweg , utilitarian voies vertes providing direct routes between and around cities).
In the Netherlands, the situation and terminology are comparable to Flanders, with the difference that there are few rail trails and many other independent cycle paths.
In France, a decree of 16 September 2004 introduced voies vertes into the Highway Code: voies vertes are defined as roads "exclusively reserved for the circulation of non-motorized vehicles, pedestrians and horse riders [ 13 ] ."
In Switzerland, there is a cross-border voie verte from Geneva to Annemasse . [ 13 ] A voie verte through Lausanne (along the railroad tracks) was planned for completion in 2018. [ 14 ]
They are most often developed on old railway lines, [ 15 ] towpaths , [ 16 ] roads closed to automobile traffic, and cultural routes ( Roman roads , pilgrimage routes). They have certain characteristics:
Voies vertes also offer services, located in preserved old facilities such as former railway stations and lockkeeper's houses. These services can be of various types: accommodation, museums , bike rental, equestrian accommodation, community centers, etc. They cater to both local users and tourists. Voies vertes are provided with information (maps, brochures, etc.) about the route itself and nearby sites. For example, several tens of kilometers of the former coastal railway of the Chemins de Fer de Provence have been converted into a cycle path between Toulon and Pramousquier (in the municipality of Le Lavandou ).
This example illustrates the main criticism of voies vertes: they sometimes contribute to downgrading, and therefore definitively condemning, railway lines that could otherwise be reopened to collectivize and decarbonize travel in peri-urban or rural areas instead of taking up space on roads. The competition between these two complementary modes can therefore seem ironic in an era of energy transition and increasing decarbonization of travel. [ 17 ]
|
https://en.wikipedia.org/wiki/Voie_verte
|
In mathematics , Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. [ 1 ] There are a few variants and associated names for this idea: Mandel notation , Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig [ 2 ] of old ideas of Lord Kelvin . The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. The notation is named after the physicists Woldemar Voigt [ 1 ] and John Nye .
For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus its order can be reduced by expressing it as a vector without loss of information:
X = [ x 11 x 12 x 12 x 22 ] = [ x 11 x 22 x 12 ] . {\displaystyle X={\begin{bmatrix}x_{11}&x_{12}\\x_{12}&x_{22}\end{bmatrix}}={\begin{bmatrix}x_{11}\\x_{22}\\x_{12}\end{bmatrix}}.}
Voigt notation is used in materials science to simplify the representation of the rank-2 stress and strain tensors, and fourth-rank stiffness and compliance tensors.
The 3×3 stress and strain tensors in their full forms can be written as:
σ = [ σ 11 σ 12 σ 13 σ 12 σ 22 σ 23 σ 13 σ 23 σ 33 ] , ε = [ ε 11 ε 12 ε 13 ε 12 ε 22 ε 23 ε 13 ε 23 ε 33 ] . {\displaystyle {\boldsymbol {\sigma }}={\begin{bmatrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{12}&\sigma _{22}&\sigma _{23}\\\sigma _{13}&\sigma _{23}&\sigma _{33}\end{bmatrix}},\qquad {\boldsymbol {\varepsilon }}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{12}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{13}&\varepsilon _{23}&\varepsilon _{33}\end{bmatrix}}.}
Voigt notation then utilises the symmetry of these matrices ( σ 12 = σ 21 {\displaystyle \sigma _{12}=\sigma _{21}} and so on) to express them instead as a 6×1 vector:
σ _ = [ σ 11 σ 22 σ 33 σ 23 σ 13 σ 12 ] T , ε _ = [ ε 11 ε 22 ε 33 γ 23 γ 13 γ 12 ] T , {\displaystyle {\underline {\sigma }}={\begin{bmatrix}\sigma _{11}&\sigma _{22}&\sigma _{33}&\sigma _{23}&\sigma _{13}&\sigma _{12}\end{bmatrix}}^{\mathsf {T}},\qquad {\underline {\varepsilon }}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{22}&\varepsilon _{33}&\gamma _{23}&\gamma _{13}&\gamma _{12}\end{bmatrix}}^{\mathsf {T}},}
where γ 12 = 2 ε 12 {\displaystyle \gamma _{12}=2\varepsilon _{12}} , γ 23 = 2 ε 23 {\displaystyle \gamma _{23}=2\varepsilon _{23}} , and γ 13 = 2 ε 13 {\displaystyle \gamma _{13}=2\varepsilon _{13}} are the engineering shear strains.
The benefit of using different representations for stress and strain is that the scalar invariance σ ⋅ ε = σ i j ε i j = σ _ ⋅ ε _ {\displaystyle {\boldsymbol {\sigma }}\cdot {\boldsymbol {\varepsilon }}=\sigma _{ij}\varepsilon _{ij}={\underline {\sigma }}\cdot {\underline {\varepsilon }}} is preserved.
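A small numerical check of this invariance in Python/NumPy (a sketch; the helper names are illustrative): the engineering shear strains carry the factor of 2, so the ordinary dot product of the two Voigt vectors reproduces the full double contraction.

```python
import numpy as np

def to_voigt_stress(s):   # 3x3 symmetric -> 6-vector, shear entries unweighted
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[0, 2], s[0, 1]])

def to_voigt_strain(e):   # engineering shear strains: gamma_ij = 2 * eps_ij
    return np.array([e[0, 0], e[1, 1], e[2, 2],
                     2 * e[1, 2], 2 * e[0, 2], 2 * e[0, 1]])

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); sigma = (A + A.T) / 2   # random symmetric stress
B = rng.normal(size=(3, 3)); eps = (B + B.T) / 2     # random symmetric strain

full = np.tensordot(sigma, eps)                      # sigma_ij * eps_ij
voigt = to_voigt_stress(sigma) @ to_voigt_strain(eps)
assert np.isclose(full, voigt)                       # scalar invariant preserved
```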
This notation now allows the three-dimensional symmetric fourth-order stiffness , C {\displaystyle C} , and compliance, S {\displaystyle S} , tensors to be reduced to 6×6 matrices:
C i j k l ⇒ C α β = [ C 11 C 12 C 13 C 14 C 15 C 16 C 12 C 22 C 23 C 24 C 25 C 26 C 13 C 23 C 33 C 34 C 35 C 36 C 14 C 24 C 34 C 44 C 45 C 46 C 15 C 25 C 35 C 45 C 55 C 56 C 16 C 26 C 36 C 46 C 56 C 66 ] . {\displaystyle C_{ijkl}\Rightarrow C_{\alpha \beta }={\begin{bmatrix}C_{11}&C_{12}&C_{13}&C_{14}&C_{15}&C_{16}\\C_{12}&C_{22}&C_{23}&C_{24}&C_{25}&C_{26}\\C_{13}&C_{23}&C_{33}&C_{34}&C_{35}&C_{36}\\C_{14}&C_{24}&C_{34}&C_{44}&C_{45}&C_{46}\\C_{15}&C_{25}&C_{35}&C_{45}&C_{55}&C_{56}\\C_{16}&C_{26}&C_{36}&C_{46}&C_{56}&C_{66}\end{bmatrix}}.}
A simple mnemonic rule for memorizing Voigt notation is as follows: write down the second-order tensor in matrix form, strike out the diagonal, continue up the third column, and finally return along the first row to its first element. The Voigt indexes are then numbered consecutively along this path from the starting point to the end.
The diagram below also shows the order of the indices: i j = ⇓ α = 11 22 33 23 , 32 13 , 31 12 , 21 ⇓ ⇓ ⇓ ⇓ ⇓ ⇓ 1 2 3 4 5 6 {\displaystyle {\begin{matrix}ij&=\\\Downarrow &\\\alpha &=\end{matrix}}{\begin{matrix}11&22&33&23,32&13,31&12,21\\\Downarrow &\Downarrow &\Downarrow &\Downarrow &\Downarrow &\Downarrow &\\1&2&3&4&5&6\end{matrix}}}
For a symmetric tensor of second rank σ = [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] {\displaystyle {\boldsymbol {\sigma }}={\begin{bmatrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{bmatrix}}} only six components are distinct, the three on the diagonal and the others being off-diagonal.
Thus it can be expressed, in Mandel notation, [ 3 ] as the vector σ ~ M = ⟨ σ 11 , σ 22 , σ 33 , 2 σ 23 , 2 σ 13 , 2 σ 12 ⟩ . {\displaystyle {\tilde {\sigma }}^{M}=\langle \sigma _{11},\sigma _{22},\sigma _{33},{\sqrt {2}}\sigma _{23},{\sqrt {2}}\sigma _{13},{\sqrt {2}}\sigma _{12}\rangle .}
The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors,
for example: σ ~ : σ ~ = σ ~ M ⋅ σ ~ M = σ 11 2 + σ 22 2 + σ 33 2 + 2 σ 23 2 + 2 σ 13 2 + 2 σ 12 2 . {\displaystyle {\tilde {\sigma }}:{\tilde {\sigma }}={\tilde {\sigma }}^{M}\cdot {\tilde {\sigma }}^{M}=\sigma _{11}^{2}+\sigma _{22}^{2}+\sigma _{33}^{2}+2\sigma _{23}^{2}+2\sigma _{13}^{2}+2\sigma _{12}^{2}.}
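A short sketch in Python/NumPy illustrating this property (the √2 weights make the Mandel vector's ordinary dot product equal the tensor double contraction; `to_mandel` is an illustrative helper name):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def to_mandel(t):
    """3x3 symmetric tensor -> Mandel 6-vector (sqrt(2) weights on shears)."""
    return np.array([t[0, 0], t[1, 1], t[2, 2],
                     SQRT2 * t[1, 2], SQRT2 * t[0, 2], SQRT2 * t[0, 1]])

A = np.arange(9.0).reshape(3, 3)
t = (A + A.T) / 2                  # make it symmetric
m = to_mandel(t)
# The ordinary dot product reproduces the full double contraction t : t.
assert np.isclose(m @ m, np.tensordot(t, t))
```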
A symmetric tensor of rank four satisfying D i j k l = D j i k l {\displaystyle D_{ijkl}=D_{jikl}} and D i j k l = D i j l k {\displaystyle D_{ijkl}=D_{ijlk}} has 81 components in three-dimensional space, but only 36 components are distinct. Thus, in Mandel notation, it can be expressed as D ~ M = ( D 1111 D 1122 D 1133 2 D 1123 2 D 1113 2 D 1112 D 2211 D 2222 D 2233 2 D 2223 2 D 2213 2 D 2212 D 3311 D 3322 D 3333 2 D 3323 2 D 3313 2 D 3312 2 D 2311 2 D 2322 2 D 2333 2 D 2323 2 D 2313 2 D 2312 2 D 1311 2 D 1322 2 D 1333 2 D 1323 2 D 1313 2 D 1312 2 D 1211 2 D 1222 2 D 1233 2 D 1223 2 D 1213 2 D 1212 ) . {\displaystyle {\tilde {D}}^{M}={\begin{pmatrix}D_{1111}&D_{1122}&D_{1133}&{\sqrt {2}}D_{1123}&{\sqrt {2}}D_{1113}&{\sqrt {2}}D_{1112}\\D_{2211}&D_{2222}&D_{2233}&{\sqrt {2}}D_{2223}&{\sqrt {2}}D_{2213}&{\sqrt {2}}D_{2212}\\D_{3311}&D_{3322}&D_{3333}&{\sqrt {2}}D_{3323}&{\sqrt {2}}D_{3313}&{\sqrt {2}}D_{3312}\\{\sqrt {2}}D_{2311}&{\sqrt {2}}D_{2322}&{\sqrt {2}}D_{2333}&2D_{2323}&2D_{2313}&2D_{2312}\\{\sqrt {2}}D_{1311}&{\sqrt {2}}D_{1322}&{\sqrt {2}}D_{1333}&2D_{1323}&2D_{1313}&2D_{1312}\\{\sqrt {2}}D_{1211}&{\sqrt {2}}D_{1222}&{\sqrt {2}}D_{1233}&2D_{1223}&2D_{1213}&2D_{1212}\\\end{pmatrix}}.}
It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law , as well as finite element analysis , [ 4 ] and Diffusion MRI . [ 5 ]
Hooke's law has a symmetric fourth-order stiffness tensor with 81 components (3×3×3×3), but because the application of such a rank-4 tensor to a symmetric rank-2 tensor must yield another symmetric rank-2 tensor, not all of the 81 elements are independent. Voigt notation enables such a rank-4 tensor to be represented by a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, which in the case of Hooke's law has geometric significance. This explains why weights are introduced (to make the mapping an isometry ).
A discussion of invariance of Voigt's notation and Mandel's notation can be found in Helnwein (2001). [ 6 ]
|
https://en.wikipedia.org/wiki/Voigt_notation
|
The Voigt profile (named after Woldemar Voigt ) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution and a Gaussian distribution . It is often used in analyzing data from spectroscopy or diffraction .
Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then
V ( x ; σ , γ ) ≡ ∫ − ∞ ∞ G ( x ′ ; σ ) L ( x − x ′ ; γ ) d x ′ , {\displaystyle V(x;\sigma ,\gamma )\equiv \int _{-\infty }^{\infty }G(x';\sigma )\,L(x-x';\gamma )\,dx',}
where x is the shift from the line center, G ( x ; σ ) {\displaystyle G(x;\sigma )} is the centered Gaussian profile:
G ( x ; σ ) ≡ e − x 2 / ( 2 σ 2 ) σ 2 π , {\displaystyle G(x;\sigma )\equiv {\frac {e^{-x^{2}/(2\sigma ^{2})}}{\sigma {\sqrt {2\pi }}}},}
and L ( x ; γ ) {\displaystyle L(x;\gamma )} is the centered Lorentzian profile:
L ( x ; γ ) ≡ γ π ( x 2 + γ 2 ) . {\displaystyle L(x;\gamma )\equiv {\frac {\gamma }{\pi (x^{2}+\gamma ^{2})}}.}
The defining integral can be evaluated as:
V ( x ; σ , γ ) = Re ⁡ [ w ( z ) ] σ 2 π , {\displaystyle V(x;\sigma ,\gamma )={\frac {\operatorname {Re} [w(z)]}{\sigma {\sqrt {2\pi }}}},}
where Re[ w ( z )] is the real part of the Faddeeva function evaluated for
z = x + i γ σ 2 . {\displaystyle z={\frac {x+i\gamma }{\sigma {\sqrt {2}}}}.}
In the limiting cases of σ = 0 {\displaystyle \sigma =0} and γ = 0 {\displaystyle \gamma =0} then V ( x ; σ , γ ) {\displaystyle V(x;\sigma ,\gamma )} simplifies to L ( x ; γ ) {\displaystyle L(x;\gamma )} and G ( x ; σ ) {\displaystyle G(x;\sigma )} , respectively.
In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms, one of which alone would produce a Gaussian profile (usually, as a result of the Doppler broadening ), and the other would produce a Lorentzian profile. Voigt profiles are common in many branches of spectroscopy and diffraction . Due to the expense of computing the Faddeeva function , the Voigt profile is sometimes approximated using a pseudo-Voigt profile.
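In practice the profile is computed directly from the Faddeeva function; for example, in Python with SciPy (`scipy.special.wofz` implements w(z); the helper name is illustrative, and recent SciPy versions also ship `scipy.special.voigt_profile` as a ready-made alternative):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt(x, sigma, gamma):
    """Centered Voigt profile: Re[w(z)] / (sigma*sqrt(2*pi)),
    with z = (x + i*gamma) / (sigma*sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-30, 30, 60001)
v = voigt(x, sigma=1.0, gamma=0.5)
print(v.sum() * (x[1] - x[0]))  # ~0.99; the Lorentzian tails decay slowly
```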
The Voigt profile is normalized:
∫ − ∞ ∞ V ( x ; σ , γ ) d x = 1 , {\displaystyle \int _{-\infty }^{\infty }V(x;\sigma ,\gamma )\,dx=1,}
since it is a convolution of normalized profiles. The Lorentzian profile has no moments (other than the zeroth), and so the moment-generating function for the Cauchy distribution is not defined. It follows that the Voigt profile will not have a moment-generating function either, but the characteristic function for the Cauchy distribution is well defined, as is the characteristic function for the normal distribution . The characteristic function for the (centered) Voigt profile will then be the product of the two:
φ f ( t ; σ , γ ) = E ( e i x t ) = e − σ 2 t 2 / 2 − γ | t | . {\displaystyle \varphi _{f}(t;\sigma ,\gamma )=E(e^{ixt})=e^{-\sigma ^{2}t^{2}/2-\gamma |t|}.}
Since normal distributions and Cauchy distributions are stable distributions , they are each closed under convolution (up to change of scale), and it follows that the Voigt distributions are also closed under convolution.
Using the above definition for z , the cumulative distribution function (CDF) can be found as follows:
Substituting the definition of the Faddeeva function (scaled complex error function ) yields for the indefinite integral:
which may be solved to yield
where 2 F 2 {\displaystyle {}_{2}F_{2}} is a hypergeometric function . In order for the function to approach zero as x approaches negative infinity (as the CDF must do), an integration constant of 1/2 must be added. This gives for the CDF of Voigt:
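Because the closed form involves the hypergeometric function, a direct numerical quadrature of the density is a convenient cross-check; the sketch below reuses the voigt_profile helper from the earlier sketch and is an illustration, not the closed-form expression itself.

```python
import numpy as np
from scipy.integrate import quad

def voigt_cdf_numeric(x, sigma, gamma):
    """CDF of the centered Voigt profile by direct quadrature.

    Slow but dependable; useful for validating closed-form implementations.
    """
    value, _ = quad(lambda t: voigt_profile(t, sigma, gamma), -np.inf, x)
    return value
```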
If the Gaussian profile is centered at μ G {\displaystyle \mu _{G}} and the Lorentzian profile is centered at μ L {\displaystyle \mu _{L}} , the convolution is centered at μ V = μ G + μ L {\displaystyle \mu _{V}=\mu _{G}+\mu _{L}} and the characteristic function is:
The probability density function is simply offset from the centered profile by μ V {\displaystyle \mu _{V}} :
where:
The mode and median are both located at μ V {\displaystyle \mu _{V}} .
Using the definition above for z {\displaystyle z} and x c = x − μ V {\displaystyle x_{c}=x-\mu _{V}} , the first and second derivatives can be expressed in terms of the Faddeeva function as
and
respectively.
Often, one or multiple Voigt profiles and/or their respective derivatives need to be fitted to a measured signal by means of non-linear least squares , e.g., in spectroscopy . Then, further partial derivatives can be utilised to accelerate computations. Instead of approximating the Jacobian matrix with respect to the parameters μ V {\displaystyle \mu _{V}} , σ {\displaystyle \sigma } , and γ {\displaystyle \gamma } with the aid of finite differences , the corresponding analytical expressions can be applied. With Re [ w ( z ) ] = ℜ w {\displaystyle \operatorname {Re} \left[w(z)\right]=\Re _{w}} and Im [ w ( z ) ] = ℑ w {\displaystyle \operatorname {Im} \left[w(z)\right]=\Im _{w}} , these are given by:
for the original Voigt profile V {\displaystyle V} ;
for the first order partial derivative V ′ = ∂ V ∂ x {\displaystyle V'={\frac {\partial V}{\partial x}}} ; and
for the second order partial derivative V ″ = ∂ 2 V ( ∂ x ) 2 {\displaystyle V''={\frac {\partial ^{2}V}{\left(\partial x\right)^{2}}}} . Since μ V {\displaystyle \mu _{V}} and γ {\displaystyle \gamma } play a relatively similar role in the calculation of z {\displaystyle z} , their respective partial derivatives also look quite similar in terms of their structure, although they result in totally different derivative profiles. Indeed, the partial derivatives with respect to σ {\displaystyle \sigma } and γ {\displaystyle \gamma } show more similarity since both are width parameters. All these derivatives involve only simple operations (multiplications and additions) because the computationally expensive ℜ w {\displaystyle \Re _{w}} and ℑ w {\displaystyle \Im _{w}} are readily obtained when computing w ( z ) {\displaystyle w\left(z\right)} . Such reuse of previous calculations allows the derivatives to be evaluated at minimal cost. This is not the case for finite-difference gradient approximation, which requires a separate evaluation of w ( z ) {\displaystyle w\left(z\right)} for each gradient component.
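To make the reuse argument concrete, the sketch below obtains V, V′, and V″ from a single evaluation of w(z), using the standard identities w′(z) = 2i/√π − 2zw(z) and w″(z) = −2w(z) − 2zw′(z); the function layout is an assumption of this illustration.

```python
import numpy as np
from scipy.special import wofz

def voigt_with_derivatives(x, mu_V, sigma, gamma):
    """Voigt profile and its first two x-derivatives from one wofz() call."""
    xc = x - mu_V
    z = (xc + 1j * gamma) / (sigma * np.sqrt(2))
    w = wofz(z)                              # the one expensive evaluation
    wp = 2j / np.sqrt(np.pi) - 2 * z * w     # w'(z)
    wpp = -2 * w - 2 * z * wp                # w''(z)
    norm = 1.0 / (sigma * np.sqrt(2 * np.pi))
    dzdx = 1.0 / (sigma * np.sqrt(2))        # dz/dx is real
    return norm * w.real, norm * dzdx * wp.real, norm * dzdx**2 * wpp.real
```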
The Voigt functions [ 1 ] U , V , and H (sometimes called the line broadening function ) are defined by
where
erfc is the complementary error function , and w ( z ) is the Faddeeva function .
with the variables scaled by the Gaussian width σ: u = x 2 σ {\displaystyle u={\frac {x}{{\sqrt {2}}\,\sigma }}} and a = γ 2 σ . {\displaystyle a={\frac {\gamma }{{\sqrt {2}}\,\sigma }}.}
The Tepper-García function , named after the Mexican-born, German-Australian astrophysicist Thor Tepper-García , is a combination of an exponential function and rational functions that approximates the line broadening function H ( a , u ) {\displaystyle H(a,u)} over a wide range of its parameters. [ 2 ] It is obtained from a truncated power series expansion of the exact line broadening function.
In its most computationally efficient form, the Tepper-García function can be expressed as
where P ≡ u 2 {\displaystyle P\equiv u^{2}} , Q ≡ 3 / ( 2 P ) {\displaystyle Q\equiv 3/(2P)} , and R ≡ e − P {\displaystyle R\equiv e^{-P}} .
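A minimal sketch using the shorthands above; the closed-form combination below follows the commonly quoted published expression and should be checked against Tepper-García (2006) before serious use.

```python
import numpy as np

def tepper_garcia(a, u):
    """Approximate line broadening function H(a, u), valid for a <~ 1e-4.

    Uses P = u^2, Q = 3/(2P), R = exp(-P) as defined in the text.
    Note: u = 0 (P = 0) must be handled separately.
    """
    P = u * u
    Q = 1.5 / P
    R = np.exp(-P)
    return R - a / (np.sqrt(np.pi) * P) * (R * R * (4 * P * P + 7 * P + 4 + Q) - Q - 1)
```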
Thus the line broadening function can be viewed, to first order, as a pure Gaussian function plus a correction factor that depends linearly on the microscopic properties of the absorbing medium (encoded in a {\displaystyle a} ); however, as a result of the early truncation in the series expansion, the error in the approximation is still of order a {\displaystyle a} , i.e. H ( a , u ) ≈ T ( a , u ) + O ( a ) {\displaystyle H(a,u)\approx T(a,u)+{\mathcal {O}}(a)} . This approximation has a relative accuracy of
over the full wavelength range of H ( a , u ) {\displaystyle H(a,u)} , provided that a ≲ 10 − 4 {\displaystyle a\lesssim 10^{-4}} .
In addition to its high accuracy, the function T ( a , u ) {\displaystyle T(a,u)} is easy to implement as well as computationally fast. It is widely used in the field of quasar absorption line analysis. [ 3 ]
The pseudo-Voigt profile (or pseudo-Voigt function ) is an approximation of the Voigt profile V ( x ) using a linear combination of a Gaussian curve G ( x ) and a Lorentzian curve L ( x ) instead of their convolution .
The pseudo-Voigt function is often used for calculations of experimental spectral line shapes .
The mathematical definition of the normalized pseudo-Voigt profile is given by
where η {\displaystyle \eta } is a function of the full width at half maximum (FWHM) parameters.
There are several possible choices for the η {\displaystyle \eta } parameter. [ 4 ] [ 5 ] [ 6 ] [ 7 ] A simple formula, accurate to 1%, is [ 8 ] [ 9 ]
where η {\displaystyle \eta } is now a function of the Lorentzian ( f L {\displaystyle f_{L}} ), Gaussian ( f G {\displaystyle f_{G}} ) and total ( f {\displaystyle f} ) full width at half maximum (FWHM) parameters. The total FWHM ( f {\displaystyle f} ) parameter is described by:
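The explicit formulas for f and η were not reproduced above; the sketch below uses the widely cited coefficients for the ~1%-accurate recipe (an assumption of this illustration) to combine normalized Gaussian and Lorentzian components sharing the total FWHM f:

```python
import numpy as np

def pseudo_voigt(x, f_G, f_L):
    """Normalized pseudo-Voigt profile eta*L + (1 - eta)*G.

    f_G, f_L: Gaussian and Lorentzian FWHMs. The total FWHM f and the
    mixing parameter eta follow the commonly cited ~1%-accurate formulas.
    """
    f = (f_G**5 + 2.69269 * f_G**4 * f_L + 2.42843 * f_G**3 * f_L**2
         + 4.47163 * f_G**2 * f_L**3 + 0.07842 * f_G * f_L**4 + f_L**5) ** 0.2
    r = f_L / f
    eta = 1.36603 * r - 0.47719 * r**2 + 0.11116 * r**3
    sigma = f / (2 * np.sqrt(2 * np.log(2)))             # Gaussian with FWHM f
    G = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    L = (f / (2 * np.pi)) / (x**2 + (f / 2) ** 2)        # Lorentzian with FWHM f
    return eta * L + (1 - eta) * G
```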
The full width at half maximum (FWHM) of the Voigt profile can be found from the FWHMs of the associated Gaussian and Lorentzian profiles. The FWHM of the Gaussian profile is
The FWHM of the Lorentzian profile is
An approximate relation (accurate to within about 1.2%) between the widths of the Voigt, Gaussian, and Lorentzian profiles is: [ 10 ]
By construction, this expression is exact for a pure Gaussian or Lorentzian.
A better approximation with an accuracy of 0.02% is given by [ 11 ] (originally found by Kielkopf [ 12 ] )
Again, this expression is exact for a pure Gaussian or Lorentzian.
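In code, the 0.02%-accurate width estimate is a one-liner; the coefficients 0.5346 and 0.2166 are the commonly quoted Kielkopf values and are assumed here.

```python
import numpy as np

def voigt_fwhm(f_G, f_L):
    """Approximate FWHM of a Voigt profile from its component FWHMs.

    Exact in the pure limits: f_L = 0 returns f_G, and f_G = 0 returns
    (0.5346 + sqrt(0.2166)) * f_L = 1.0 * f_L.
    """
    return 0.5346 * f_L + np.sqrt(0.2166 * f_L**2 + f_G**2)
```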
In the same publication, [ 11 ] a slightly more precise (within 0.012%), yet significantly more complicated expression can be found.
The asymmetric pseudo-Voigt (Martinelli) function resembles a split normal distribution in having different widths on each side of the peak position. Mathematically this is expressed as:
V p ( x , f ) = η ⋅ L ( x , f ) + ( 1 − η ) ⋅ G ( x , f ) {\displaystyle V_{p}(x,f)=\eta \cdot L(x,f)+(1-\eta )\cdot G(x,f)}
with 0 < η < 1 {\displaystyle 0<\eta <1} being the weight of the Lorentzian and the width f {\displaystyle f} being a split function ( f = f 1 {\displaystyle f=f_{1}} for x < 0 {\displaystyle x<0} and f = f 2 {\displaystyle f=f_{2}} for x ≥ 0 {\displaystyle x\geq 0} ). In the limit f 1 → f 2 {\displaystyle f_{1}\rightarrow f_{2}} , the Martinelli function reduces to a symmetric pseudo-Voigt function. The Martinelli function has been used to model elastic scattering on resonant inelastic X-ray scattering instruments. [ 13 ]
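A direct transcription of the split-width definition into code (the component line shapes and the loose handling of normalization are assumptions of this sketch):

```python
import numpy as np

def martinelli_pseudo_voigt(x, eta, f1, f2):
    """Asymmetric (Martinelli) pseudo-Voigt: FWHM f1 for x < 0, f2 for x >= 0.

    eta is the Lorentzian weight, 0 < eta < 1. With the piecewise width
    the profile is only approximately normalized unless f1 == f2.
    """
    x = np.asarray(x, dtype=float)
    f = np.where(x < 0, f1, f2)
    sigma = f / (2 * np.sqrt(2 * np.log(2)))
    G = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    L = (f / (2 * np.pi)) / (x**2 + (f / 2) ** 2)
    return eta * L + (1 - eta) * G
```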
|
https://en.wikipedia.org/wiki/Voigt_profile
|
The Voitenko compressor is a shaped charge adapted from its original purpose of piercing thick steel armour to the task of accelerating shock waves . It was proposed by Anatoly Emelyanovich Voitenko (Анатолий Емельянович Войтенко), a Soviet scientist, in 1964. [ 1 ] [ 2 ] It slightly resembles a wind tunnel .
The Voitenko compressor initially separates a test gas from a shaped charge with a malleable steel plate. When the shaped charge detonates, most of its energy is focused on the steel plate, driving it forward and pushing the test gas ahead of it. Ames Research Center translated this idea into a self-destroying shock tube. A 30-kilogram (66 lb) shaped charge accelerated the gas in a 3-cm glass-walled tube 2 meters in length. The velocity of the resulting shock wave was a phenomenal 67 km/s (220,000 ft/s). The apparatus exposed to the detonation was, of course, completely destroyed, but not before useful data was extracted. [ 3 ] [ 4 ] In a typical Voitenko compressor, a shaped charge accelerates hydrogen gas, which in turn accelerates a thin disk up to about 40 km/s. [ 5 ] A slight modification to the Voitenko compressor concept is a super-compressed detonation, [ 6 ] [ 7 ] a device that uses a compressible liquid or solid fuel in the steel compression chamber instead of a traditional gas mixture. [ 8 ] [ 9 ] A further extension of this technology is the explosive diamond anvil cell , [ 10 ] [ 11 ] [ 12 ] [ 13 ] utilizing multiple opposed shaped-charge jets projected at a single steel-encapsulated fuel, [ 14 ] such as hydrogen . The fuels used in these devices, along with the secondary combustion reactions and long blast impulse, produce similar conditions to those encountered in fuel-air and thermobaric explosives. [ 15 ] [ 16 ]
This method of detonation produces energies over 100 keV (~10⁹ K temperatures), suitable not only for nuclear fusion , but other higher-order quantum reactions as well. [ 17 ] [ 18 ] [ 19 ] [ 20 ] The UTIAS explosive-driven-implosion facility was used to produce stable, centered and focused hemispherical implosions to generate neutrons from D–D reactions. The simplest and most direct method proved to be a predetonated stoichiometric mixture of deuterium and oxygen . The other successful method was using a miniature Voitenko-type compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. [ 21 ] [ 22 ] In brief, PETN solid explosive is used to form a hemispherical shell (3–6 mm thick) in a 20-cm diameter hemispherical cavity milled in a massive steel chamber. The remaining volume is filled with a stoichiometric mixture of H 2 (or D 2 ) and O 2 . This mixture is detonated by a very short, thin exploding wire located at the geometric center. The arrival of the detonation wave at the spherical surface instantly and simultaneously fires the explosive liner. The detonation wave in the explosive liner hits the metal cavity, reflects, and implodes on the preheated burnt gases, focuses at the center of the hemisphere (50 microseconds after the initiation of the exploding wire) and reflects, leaving behind a very small pocket (1 mm) of extremely high-temperature, high-pressure and high-density plasma. [ 23 ] [ 24 ] [ 25 ]
|
https://en.wikipedia.org/wiki/Voitenko_compressor
|
Volanesorsen , sold under the brand name Waylivra , is a triglyceride-reducing drug . It is a second-generation [ 3 ] 2'- O -methoxyethyl (2'-MOE) chimeric antisense therapeutic oligonucleotide (ASO) [ 4 ] that targets the messenger RNA for apolipoprotein C-III (apo-CIII).
The most common side effects include reduced platelet levels and reactions at the site of the injection such as pain, swelling, itching, or bruising. [ 5 ]
Volanesorsen is an 'antisense oligonucleotide', a very short piece of synthetic RNA (a type of genetic material). [ 5 ] It has been designed to block the production of a protein that slows down the breakdown of fats, called apolipoprotein C-III. [ 5 ] By blocking the production of this protein, the medicine reduces the level of triglycerides in the blood and, as a result, fat accumulation in the body, which is expected to reduce the risk of pancreatitis. [ 5 ]
Familial chylomicronaemia syndrome (FCS) (also known as type I hyperlipoproteinaemia) is an inherited disease where people have abnormally high levels of some types of fat called triglycerides in their blood. [ 5 ] The excess fat accumulates in organs such as the spleen and liver, which become abnormally enlarged. [ 5 ] Fat accumulation can also cause repeated bouts of pancreatitis (inflammation of the pancreas) and xanthomas (formation of yellow fatty deposits just under the skin, generally around joints). [ 5 ]
Volanesorsen is indicated as an adjunct to diet in adults with genetically confirmed familial chylomicronemia syndrome (FCS) and at high risk for pancreatitis, in whom response to diet and triglyceride lowering therapy has been inadequate. [ 5 ]
It is in Phase III clinical trials for the treatment of hypertriglyceridemia , [ when? ] familial chylomicronemia syndrome and familial partial lipodystrophy . [ 6 ] [ 7 ]
The drug was discovered and developed by Ionis Pharmaceuticals .
Volanesorsen was designated an orphan drug by the European Medicines Agency (EMA) in February 2014, for phosphorothioate oligonucleotide targeted to apolipoprotein C-III for treatment of familial chylomicronaemia syndrome. [ 8 ]
Volanesorsen was approved for medical use in the European Union in May 2019. [ 5 ]
Volanesorsen was effective in reducing triglycerides in the blood in a study of 67 participants with familial chylomicronemia syndrome (FCS). [ 5 ] After three months, participants given volanesorsen had an average 77% reduction in the level of triglycerides compared with an average 18% increase in participants given placebo (a dummy treatment). [ 5 ] All participants in the study were on a low-fat diet in addition to receiving volanesorsen or placebo. [ 5 ]
The complete sequence of volanesorsen is: [ 2 ] : 165–166
Volanesorsen is the International nonproprietary name (INN). [ 2 ]
|
https://en.wikipedia.org/wiki/Volanesorsen
|
Volatiles are the group of chemical elements and chemical compounds that can be readily vaporized . In contrast with volatiles, elements and compounds that are not readily vaporized are known as refractory substances.
On planet Earth, the term 'volatiles' often refers to the volatile components of magma . In astrogeology volatiles are investigated in the crust or atmosphere of a planet or moon. Volatiles include nitrogen , carbon dioxide , ammonia , hydrogen , methane , sulfur dioxide , water and others.
Planetary scientists often classify volatiles with exceptionally low melting points, such as hydrogen and helium , as gases, whereas those volatiles with melting points above about 100 K (–173 °C , –280 °F ) are referred to as ices. The terms "gas" and "ice" in this context can apply to compounds that may be solids, liquids or gases. Thus, Jupiter and Saturn are gas giants , and Uranus and Neptune are ice giants , even though the vast majority of the "gas" and "ice" in their interiors is a hot, highly dense fluid that gets denser as the center of the planet is approached, and in the case of Neptune , may reach temperatures of 5,100 °C. Inside of Jupiter's orbit, cometary activity is driven by the sublimation of water ice. Supervolatiles such as CO and CO 2 have generated cometary activity as far out as 25.8 AU (3.86 billion km). [ 1 ]
In igneous petrology the term more specifically refers to the volatile components of magma (mostly water vapor and carbon dioxide) that affect the appearance and explosivity of volcanoes . Volatiles in a magma with a high viscosity , generally felsic with a higher silica (SiO 2 ) content, tend to produce explosive eruptions . Volatiles in a magma with a low viscosity, generally mafic with a lower silica content, tend to escape in effusive eruptions and can give rise to a lava fountain .
Some volcanic eruptions are explosive because of the mixing between water and magma reaching the surface, which releases energy suddenly. However, in some cases, the eruption is caused by volatiles dissolved in the magma itself. [ 2 ] Approaching the surface, the pressure decreases and the volatiles come out of solution, creating bubbles that circulate in the liquid . The bubbles become connected together, forming a network, which promotes the fragmentation of the magma into small drops, spray, or clots suspended in the gas . [ 2 ]
Generally, 95-99% of magma is liquid rock. However, the small percentage of gas present represents a very large volume when it expands on reaching atmospheric pressure . Gas is thus important in a volcano system because it generates explosive eruptions. [ 2 ] Magma in the mantle and lower crust has a high volatile content. Water and carbon dioxide are not the only volatiles that volcanoes release; other volatiles include hydrogen sulfide and sulfur dioxide . Sulfur dioxide is common in basaltic and rhyolite rocks. Volcanoes also release a large amount of hydrogen chloride and hydrogen fluoride as volatiles. [ 2 ]
There are three main factors that affect the dispersion of volatiles in magma: the confining pressure , the composition of the magma, and the temperature of the magma. Pressure and composition are the most important parameters. [ 2 ] To understand how magma behaves as it rises to the surface, the role of solubility within the magma must be known. Empirical laws have been fitted for different magma–volatile combinations. For instance, for water in basaltic magma the relation is n = 0.1078 √P, where n is the amount of dissolved gas as a weight percentage (wt%) and P is the pressure in megapascals (MPa) acting on the magma. The coefficient changes with composition and species: for water in rhyolite n = 0.4111 √P, and for carbon dioxide n = 0.0023 P. These simple equations work only if a single volatile is present in the magma. In reality, the situation is not so simple, because magmas often contain several volatiles that interact chemically in complex ways. [ 2 ]
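A minimal sketch of these empirical laws (coefficients as quoted above; valid only for a single volatile in the melt):

```python
import math

def dissolved_water_wt_percent(P_mpa, k=0.1078):
    """Maximum dissolved water n (wt%) at pressure P (MPa): n = k*sqrt(P).

    k = 0.1078 for water in basaltic magma, 0.4111 for water in rhyolite
    (empirical coefficients quoted above).
    """
    return k * math.sqrt(P_mpa)

def dissolved_co2_wt_percent(P_mpa):
    """Maximum dissolved CO2 n (wt%) at pressure P (MPa): n = 0.0023*P."""
    return 0.0023 * P_mpa

# At 200 MPa: ~1.5 wt% water in basalt, ~5.8 wt% in rhyolite, ~0.46 wt% CO2.
```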
Simplifying, the solubility of water in rhyolite and basalt is a function of pressure and depth below the surface, in the absence of other volatiles. Both basalt and rhyolite lose water with decreasing pressure as the magma rises to the surface. The solubility of water is higher in rhyolite than in basaltic magma. Knowledge of the solubility allows the determination of the maximum amount of water that can be dissolved at a given pressure. [ 2 ] If the magma contains less water than the maximum possible amount, it is undersaturated in water. There is usually insufficient water and carbon dioxide in the deep crust and mantle, so magma is often undersaturated in these conditions. Magma becomes saturated when it holds the maximum amount of water that can be dissolved in it. If the magma continues to rise toward the surface while holding that water, it becomes supersaturated , and the excess water is expelled as bubbles or water vapor, because the decreasing pressure lowers the solubility. [ 2 ] The solubility of carbon dioxide in magma is considerably lower than that of water, and it tends to exsolve at greater depth. In this case water and carbon dioxide are considered independent. [ 2 ] What affects the behavior of the magmatic system is the depth at which carbon dioxide and water are released. The low solubility of carbon dioxide means that it starts to form bubbles before reaching the magma chamber; at this point the magma is already supersaturated. The magma, enriched in carbon dioxide bubbles, rises to the roof of the chamber, and the carbon dioxide tends to leak through cracks into the overlying caldera. [ 2 ] Basically, during an eruption the magma loses more carbon dioxide than water, since the magma in the chamber is already supersaturated in carbon dioxide. Overall, water is the main volatile during an eruption. [ 2 ]
Bubble nucleation happens when a volatile becomes saturated . The bubbles are composed of molecules that tend to aggregate spontaneously in a process called homogeneous nucleation . The surface tension acts on the bubbles, shrinking their surface and forcing the molecules back into the liquid. [ 2 ] Nucleation is easier where the available surface is irregular, since there the volatile molecules can ease the effect of surface tension. [ 2 ] Nucleation can occur thanks to the presence of solid crystals stored in the magma chamber, which are ideal potential nucleation sites for bubbles. If there is no nucleation in the magma, bubble formation may begin very late and the magma can become significantly supersaturated. The balance between supersaturation pressure and bubble radius is expressed by the equation ΔP = 2σ/r, where ΔP is the supersaturation pressure (on the order of 100 MPa) and σ is the surface tension. [ 2 ] If nucleation starts late, when the magma is already highly supersaturated, the distance between bubbles becomes smaller. [ 2 ] Essentially, if the magma rises rapidly to the surface, the system will be further out of equilibrium and more supersaturated. When the magma rises there is competition between adding new molecules to existing bubbles and creating new ones; the distance between molecules characterizes how efficiently volatiles aggregate at a new or existing site. Crystals inside magma can determine how bubbles grow and nucleate. [ 2 ]
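Rearranging the balance gives the critical bubble radius r = 2σ/ΔP; the numbers in the usage line are illustrative assumptions, not measured values:

```python
def critical_bubble_radius(sigma_n_per_m, delta_p_pa):
    """Critical bubble radius r = 2*sigma/delta_P from the balance above."""
    return 2.0 * sigma_n_per_m / delta_p_pa

# e.g. sigma ~ 0.1 N/m and delta_P ~ 100 MPa give r ~ 2e-9 m (nanometer scale)
```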
|
https://en.wikipedia.org/wiki/Volatile_(astrogeology)
|
In chemistry, the terms volatile acid (or volatile fatty acid (VFA)) and volatile acidity (VA) are used somewhat differently in various application areas.
In wine chemistry , the volatile acids are those that can be separated from wine through steam distillation. [ 1 ] Many factors influence the level of VA, but the growth of spoilage bacteria and yeasts are the primary source and consequently VA is often used to quantify the degree of wine oxidation and spoilage. [ 2 ]
Acetic acid is the primary volatile acid in wine, but smaller amounts of lactic , formic , butyric , propionic acid , carbonic acid (from carbon dioxide), and sulfurous acid (from sulfur dioxide) may be present and contribute to VA; [ 2 ] [ 3 ] [ 4 ] in analysis, measures may be taken to exclude or correct for the VA due to carbonic, sulfurous, and sorbic acids. [ 1 ] [ 5 ] Other acids present in wine, including malic and tartaric acid , are considered non-volatile or fixed acids . Together, volatile and non-volatile acidity comprise the total acidity . [ 1 ]
Classical analysis for VA involves distillation in a Cash or Markham still, followed by titration with standardized sodium hydroxide , and reporting of the results as acetic acid. [ 1 ] [ 6 ] [ 7 ] Several alternatives to the classical analysis have been developed.
While VA is typically considered a wine flaw or fault , winemakers may intentionally allow a small amount of VA in their product for its contribution to the wine's sensory complexity. [ 3 ] Excess VA is difficult for winemakers to correct. [ 1 ] In some countries, including the United States, European Union, and Australia, the law sets a limit on the level of allowable VA. [ 1 ] [ 7 ] [ 8 ]
In wastewater treatment , the volatile acids are the short-chain fatty acids (1–6 carbon atoms) that are water-soluble and can be steam-distilled at atmospheric pressure – primarily acetic, propionic, and butyric acid. [ 9 ] These acids are produced during anaerobic digestion . [ 10 ] [ 11 ] In a well-functioning digester, the volatile acids will be consumed by the methane-forming bacteria. [ 12 ] The volatile acid/alkalinity ratio is often measured as one indicator of a digester's condition. [ 13 ] The acceptable level of volatile fatty acids in environmental waters is up to 50,000 ppm. [ 14 ]
Volatile fatty acids can be analyzed by titration , distillation , steam distillation , or chromatography . [ 15 ] Titration provides approximate but relatively quick results; it is widely used by wastewater treatment plants to track the status of a digester. Distillation is similarly used in wastewater treatment plants and produces approximate results; 15–32% of the VFAs are lost during distillation.
Steam distillation can recover 92–98% of a sample's VFAs. This method is more precise than the previous two methods, but requires about 4 hours to complete.
Chromatography gives the most precise and accurate results. It is capable of qualitatively and quantitatively analyzing each individual VFA.
In physiology, volatile acid (or respiratory acid ) refers to carbonic acid, a product of dissolved carbon dioxide. In this context, volatile indicates that it can be expelled as a gas through the lungs. [ 16 ] [ 17 ] Carbonic acid is the only physiologically volatile acid; all other acids are physiologically nonvolatile acids (also known as fixed or metabolic acids ). Volatile acid results from the aerobic oxidation of substances such as carbohydrates and fatty acids. [ 18 ]
Volatile acid concentration can be used to detect adulteration of butter with less expensive fats. Butterfat has uncommonly high levels of volatile butyric and caproic acids, and mixing with fats from other sources dilutes the volatile acids. A measurement of the volatile acids is known as the Reichert–Meissl value . [ 19 ] [ 20 ] [ 21 ]
In digestion, volatile acids or volatile fatty acids are short chain fatty acids . They are especially important in the digestion of ruminant animals, where they result from the action of rumen flora, and are absorbed as an energy source by the animal.
In workplace air samples, concentrations of hydrochloric , hydrobromic , and nitric acid may be monitored as hazardous volatile acids. [ 22 ]
|
https://en.wikipedia.org/wiki/Volatile_acid
|
A volatile corrosion inhibitor (VCI) is a material that protects metals from corrosion . Corrosion inhibitors are chemical compounds that can decrease the corrosion rate of a material, typically a metal or an alloy . NACE International Standard TM0208 defines volatile corrosion inhibitor (VCI) as a chemical substance that acts to reduce corrosion by a combination of volatilization from a VCI material, vapor transport in the atmosphere of an enclosed environment, and condensation onto surfaces in the space, including absorption , dissolution , and hydrophobic effects on metal surfaces, where the rate of corrosion of metal surfaces is thereby inhibited. They are also called vapor-phase inhibitors, vapor-phase corrosion inhibitors, and vapor-transported corrosion inhibitors.
VCIs come in various formulations that are dependent on the type of system they will be used in; for example, films, oils , coatings , cleaners , etc. There is also a variety of formulations that provide protection in ferrous , nonferrous , or multi-metal applications. Other variables include the amount of vapor-phase compared to contact-phase inhibitors. [ 1 ] Because they are volatile at ambient temperature , VCI compounds can reach inaccessible crevices in metallic structures. [ 2 ]
V.VCI, also called vacuum VCI, refers to formulations with special performance properties under vacuum in addition to their corrosion protection properties. [ citation needed ]
The first widescale use of VCIs can be traced to Shell 's patent for dicyclohexylammonium nitrite (DICHAN), which was eventually commercialized as VPI 260. [ 3 ] DICHAN was used extensively by the US military to protect a wide variety of metallic components from corrosion via various delivery systems, VCI powder, VCI paper, VCI solution, VCI slushing compound, etc.
Safety and health concerns, as well as inherent limitations, have led to the abandonment of DICHAN as a VCI. [ 4 ] At present, commercial VCI compounds are typically salts of moderately strong bases and weak volatile acids. The typical bases are amines and the acids are carbonic, nitrous and carboxylic. [ 5 ]
For steel , the first step will be the volatilization of the inhibitor into the airspace. This may entail simple evolution of the molecule or the chemical may dissociate first and then volatilize. [ 6 ] The molecules will then diffuse through the enclosed airspace until some of the molecules reach the metallic surface to be protected. There are two likely paths once the molecules reach the metallic surface. First the molecule may adsorb onto the metal surface thereby forming a barrier to aggressive ions and displacing any condensed water. [ 6 ] [ 7 ]
The second path involves the condensed water layer that has been shown to exist on the metallic surface. [ 8 ] The VCI molecules will dissolve into the condensed water layer, raising the pH . An alkaline pH has been shown to have a beneficial effect on the corrosion resistance for steel. [ 6 ]
The mechanism for copper begins the same as for steel, evolution of the inhibitor. Once at the copper surface however, the inhibitor will form a copper benzotriazole complex which is protective. [ 9 ]
Vapor pressure is a critical parameter in VCI effectiveness. The most favorable range of pressure is 10 −3 to 10 −2 Pa at room temperature. Insufficient pressure leads to the slow establishment of the protective layer; if the pressure is too high, VCI effectiveness is limited to a short time. [ 10 ] [ 11 ]
VCIs have been applied across a wide variety of application areas:
Packaging – One of the first widespread uses for VCIs was VCI paper which was used to wrap parts for transportation and/or storage. The technology then evolved with the development of VCI film, where the inhibitor was incorporated into Polyethylene film. [ 8 ] This offered the advantage that parts could be stored in the VCI film without any rust -preventative (RP) oil, which would typically have to be removed before part was placed into service. In places where the VCI film is in direct contact with the metal, VCI molecules adsorb on the metal surfaces, creating an invisible molecular barrier against corrosive elements such as oxygen, moisture, and chlorides. As VCI molecules vaporize out of the film and diffuse throughout the package, they also form a protective molecular layer on metal surfaces not in direct contact with the film. When the packaging is removed, the VCI molecules simply vaporize and float away. [ 12 ] VCI films protect metals both through direct contact and vapor action. Large Equipment/Assets are wrapped in VCI heat shrinkable film for long term outdoor storage. The use of polymer films for thorough protection of electronic equipment during shipment or storage should take into account the prevention of electrostatic discharge (ESD), corrosion, and the disposal of the film after use. A main property that makes a polymer film a viable packaging material for electronic equipment is the film's ability to eliminate electrostatic discharge. The most recent property addition to VCI film is biodegradability . [ 12 ]
Coatings - The use of VCIs as alternative corrosion inhibitor technologies in coating is not a new concept. In the last few years, however, with growing environmental pressure to reduce the use of traditional inhibitors containing heavy metals , they have gained in popularity. Since VCI particles have a polar attraction to the metal substrate, this allows them to work in the coating without negatively impacting other components of the coating, such as defoamers, wetting agents, levelling agents, etc. VCIs are typically added to the formulation in very small amounts by weight of the overall formula. The particle size of the VCIs is very small in comparison to traditionally used inhibitors. This allows the VCIs to migrate into the smaller voids more effectively. Once the VCIs have adsorbed on the surface of the metal, they provide an effective barrier that is hydrophobic and prevents moisture from getting through to the metal surface. Consequently, this prevents the formation of a corrosion cell and renders the moisture ineffective. [ 13 ]
Emitter – VCI in the form of a capsule, foam, cup, etc., is placed within an electrical cabinet , junction box , etc., to provide corrosion protection to the various components inside the box. VCI emitters also provide protection against H2S, SO2 , ammonia, and humidity. They are mostly used for electrical components because they do not affect electrical, surface, or optical properties.
Pipe casings – A mixture of VCI and a swellable gel is injected into the annular space between the pipe casing (the outer pipe) and the carrier pipe (the inner pipe) to provide corrosion protection to the carrier pipe. This application has recently been of wider interest as it has been approved by PHMSA as a means to address a shorted casing in a CP-protected pipeline. (PHMSA rules dictate that a shorted casing on a PHMSA-regulated pipeline be repaired or treated). Details can also be found in NACE SP-200. [ 14 ]
Pipeline preservation (internal) - VCIs are seeing widespread application for the mitigation of corrosion of the internal surfaces of new and/or existing out-of-service pipelines. [ 9 ] Top-of-the-line (TOL) corrosion typically occurs in wet gas pipelines that have a stratified flow regime and poor thermal insulation . TOL corrosion is predominantly a problem of protection in the gas phase. [ 15 ] Tests showed that the best potential for providing corrosion protection for TOL came from azoles , certain acetylene alcohols, and a "green" volatile aldehyde . [ 16 ]
For new pipelines, the time period between hydrotesting and operations can be very unpredictable and may extend for months. Historical data has shown that significant corrosion issues can arise as a result of residual hydrotest water. [ 14 ] For a piggable pipeline, an aqueous solution of VCI is pushed down the pipeline between two pigs after completion of the hydrotest operation. This provides corrosion mitigation until the line is put into service. [ 14 ] For a non-piggable pipeline, the low sections where residual hydrotest water may collect after draining are identified and an aqueous VCI solution is added at nearby high points such that the inhibitor solution will flow into the low sections, thereby treating the residual water with inhibitor. [ 14 ]
For pipeline sections that are being idled, the low-lying sections are identified, and an inhibitor solution is added at nearby high points as to fill the low-lying section to a predetermined depth. [ 14 ]
Aboveground storage tanks (Soilside Bottom) - The bottoms of aboveground storage tanks are typically coated on the inside (product side) to prevent corrosion. The other side of the bottom (soilside) is not coated, and the unprotected steel rests directly on a foundation. There are various styles of foundations: a concrete ringwall with a sand bed and a liner; a hard pad, such as concrete or asphalt; a double bottom; and finally simple soil. [ 14 ] VCIs are applied via various methods depending on the tank foundation.
For tanks with a concrete ringwall, a sand bed and a liner, the VCI is typically installed as an aqueous solution. The solution is either injected at minimal pressure through the leak detection ports, (distribution of the solution through the sand is primarily via capillary action) or through a preinstalled distribution system of perforated pipes. [ 17 ] The tank can be in or out of service.
Various options are available for a tank on a hard pad depending on whether the tank is in or out of service. For a tank that is in service, a ring of perforated pipes is installed at the edge of the chime sealed via a membrane that creates an enclosed space between the tank chime and the hard pad foundation. The VCI is supplied as a powder in mesh sleeves that are threaded into the perforated pipes. Upon depletion of the VCI, the mesh sleeves are removed, and new sleeves installed. [ 18 ] For a tank that is out of service with the floor removed, grooves are cut into the hard pad. A channel is also cut from the end of the groove to extend beyond the tank chime. Perforated pipe with a mesh cover is laid at the bottom of the cut grooves. The groove is then filled with sand. The tank bottom is then installed as normal. The VCI is supplied as a powder in mesh sleeves that are installed into the perforated pipe. The ends of the perforated pipes are sealed closed. Upon depletion of the VCI, the mesh sleeves are removed, and new sleeves installed. [ 19 ] For a tank that is out of service without the floor removed, the typical approach is to inject the VCI as an aqueous solution through ports that have been installed through the floor which often are the helium ports that were used to verify the tank floor integrity. [ 18 ]
There are two typical geometries for a double bottom tank. In the first, the space between the two floors has a liner and a sand bed; in the second, a liner and a concrete pad with radial slots. (This style of double bottom is often called an El Segundo double bottom). For a double bottom with a liner and sand bed, the VCI is supplied as an aqueous solution which is injected through the leak detection ports. For an El Segundo bottom that is in service, the VCI is again supplied as an aqueous solution that is injected through the leak detection ports. The ports are sealed closed and the solution is allowed to stand for a short period of time. The ports are then opened and the VCI solution is drained, leaving a residual amount of the VCI solution within the space. This residual VCI provides the corrosion protection for the space. For an El Segundo bottom that is out of service, perforated pipes are installed into the grooves in the concrete that have leak detection ports. Mesh sleeves containing inhibitor powder are inserted into the perforated pipes and the leak detection ports are closed.
Aboveground storage tanks (Roofs) – The environment in the headspace of an aboveground storage tank can be very aggressive especially for tanks storing crude oil . The environment is aggressive as a result of the acidic species that are typically found in crude oil, ( sour crude ). Corrosion protection is supplied via a system of dispensers that have been attached to ports that have been installed on the tank roof. (Ports and shut-off valves are installed when the tank is out of service). Bottles containing the VCI are placed in the dispenser and the shut off valves are opened. The VCI has a high vapor pressure such that the inhibitor will saturate the airspace within the dispenser and then will diffuse through the open port into the storage tank headspace. [ 20 ] [ 21 ]
Oils - The most common use of VCIs in oils is for the protection of oil containing systems like an engine or hydraulics during intermittent use or during longer-term storage (mothballing). The VCI treated oil is typically added to the existing oil and the unit is run to fully circulate the treated oil throughout the system. The system is then shut off for storage. The VCI treated oil can also be fogged into void spaces within a system or enclosed space. [ 21 ]
Interior of large enclosed spaces – VCIs have been used to protect the interior of equipment such as tanks, vessels, boilers , piping, heat exchangers , etc., especially for voids and/or recessed areas of interior cavities during storage and/or transportation. The typical means are fogging/blowing the VCI powder into the interior space or applying the VCI powder in packet form. For smaller volumes, the packets are simply distributed within the space. For larger volumes, the packets are attached to leads that are then hung at the perimeter of the space. [ 22 ]
Water treatment – Aqueous VCI solutions have been used to flush/rinse pipelines, pumps, manifolds, enclosed pits, heat exchangers, etc. as preparation for mothballing/storage.
Specialty covers – VCI film covers have been used to protect flanges , valves , etc. in harsh environments such as chemical processing plants, offshore platforms, etc. [ 23 ]
|
https://en.wikipedia.org/wiki/Volatile_corrosion_inhibitor
|
Volatile organic compounds ( VOCs ) are organic compounds that have a high vapor pressure at room temperature . [ 1 ] They are common and exist in a variety of settings and products, including but not limited to house mold , upholstered furniture , arts and crafts supplies, dry cleaned clothing, and cleaning supplies . [ 2 ] VOCs are responsible for the odor of scents and perfumes as well as pollutants . They play an important role in communication between animals and plants, such as attractants for pollinators, protection from predation, and even inter-plant interactions. [ 3 ] [ 4 ] [ 5 ] Some VOCs are dangerous to human health or cause harm to the environment , often despite the odor being perceived as pleasant, such as " new car smell ". [ 6 ]
Anthropogenic VOCs are regulated by law, especially indoors, where concentrations are the highest. Most VOCs are not acutely toxic , but may have long-term chronic health effects. Some VOCs have been used in pharmaceutical settings , while others are the target of administrative controls because of their recreational use . The high vapor pressure of VOCs correlates with a low boiling point , which relates to the number of the sample's molecules in the surrounding air, a trait known as volatility . [ 7 ]
Diverse definitions of the term VOC are in use. Some examples are presented below.
Health Canada classifies VOCs as organic compounds that have boiling points roughly in the range of 50 to 250 °C (122 to 482 °F). The emphasis is placed on commonly encountered VOCs that would have an effect on air quality. [ 8 ]
The European Union defines a VOC as "any organic compound as well as the fraction of creosote , having at 293.15 K a vapour pressure of 0.01 kPa or more, or having a corresponding volatility under the particular conditions of use;". [ 9 ] The VOC Solvents Emissions Directive was the main policy instrument for the reduction of industrial emissions of volatile organic compounds (VOCs) in the European Union. It covers a wide range of solvent-using activities, e.g. printing, surface cleaning, vehicle coating, dry cleaning and manufacture of footwear and pharmaceutical products. The VOC Solvents Emissions Directive requires installations in which such activities are applied to comply either with the emission limit values set out in the Directive or with the requirements of the so-called reduction scheme. Article 13 of The Paints Directive, approved in 2004, amended the original VOC Solvents Emissions Directive and limits the use of organic solvents in decorative paints and varnishes and in vehicle finishing products. The Paints Directive sets out maximum VOC content limit values for paints and varnishes in certain applications. [ 10 ] [ 11 ] The Solvents Emissions Directive was replaced by the Industrial Emissions Directive from 2013.
The People's Republic of China defines a VOC as those compounds that have "originated from automobiles, industrial production and civilian use, burning of all types of fuels, storage and transportation of oils, fitment finish, coating for furniture and machines, cooking oil fume and fine particles (PM 2.5)", and similar sources. [ 12 ] The Three-Year Action Plan for Winning the Blue Sky Defence War released by the State Council in July 2018 creates an action plan to reduce 2015 VOC emissions 10% by 2020. [ 13 ]
The Central Pollution Control Board of India released the Air (Prevention and Control of Pollution) Act in 1981, amended in 1987, to address concerns about air pollution in India . [ 14 ] While the document does not differentiate between VOCs and other air pollutants, the CPCB monitors "oxides of nitrogen (NO x ), sulphur dioxide (SO 2 ), fine particulate matter (PM10) and suspended particulate matter (SPM)". [ 15 ]
The definitions of VOCs used for the control of photochemical smog precursors by the U.S. Environmental Protection Agency (EPA) and by state agencies in the US with independent outdoor air pollution regulations include exemptions for VOCs that are determined to be non-reactive, or of low reactivity, in the smog formation process. Prominent examples are the VOC regulations issued by the South Coast Air Quality Management District in California and by the California Air Resources Board (CARB). [ 17 ] However, this specific use of the term VOCs can be misleading, especially when applied to indoor air quality, because many chemicals that are not regulated as outdoor air pollution can still be important for indoor air pollution.
Following a public hearing in September 1995, California's ARB uses the term "reactive organic gases" (ROG) to measure organic gases. The CARB revised the definition of "Volatile Organic Compounds" used in their consumer products regulations, based on the committee's findings. [ 18 ]
In addition to drinking water , VOCs are regulated in pollutant discharges to surface waters (both directly and via sewage treatment plants) [ 19 ] as hazardous waste, [ 20 ] but not in non-industrial indoor air. [ 21 ] The Occupational Safety and Health Administration (OSHA) regulates VOC exposure in the workplace. Volatile organic compounds that are classified as hazardous materials are regulated by the Pipeline and Hazardous Materials Safety Administration while being transported.
Most VOCs in Earth's atmosphere are biogenic, largely emitted by plants. [ 7 ]
Biogenic volatile organic compounds (BVOCs) encompass VOCs emitted by plants, animals, or microorganisms, and while extremely diverse, are most commonly terpenoids , alcohols, and carbonyls (methane and carbon monoxide are generally not considered). [ 23 ] Not counting methane , biological sources emit an estimated 760 teragrams of carbon per year in the form of VOCs. [ 22 ] The majority of VOCs are produced by plants, the main compound being isoprene . Small amounts of VOCs are produced by animals and microbes. [ 24 ] Many VOCs are considered secondary metabolites , which often help organisms in defense, such as plant defense against herbivory . The strong odor emitted by many plants consists of green leaf volatiles , a subset of VOCs. Although intended for nearby organisms to detect and respond to, these volatiles can be detected and communicated through wireless electronic transmission, by embedding nanosensors and infrared transmitters into the plant materials themselves. [ 25 ]
Emissions are affected by a variety of factors, such as temperature, which determines rates of volatilization and growth, and sunlight, which determines rates of biosynthesis . Emission occurs almost exclusively from the leaves, the stomata in particular. VOCs emitted by terrestrial forests are often oxidized by hydroxyl radicals in the atmosphere; in the absence of NO x pollutants, VOC photochemistry recycles hydroxyl radicals to create a sustainable biosphere–atmosphere balance. [ 26 ] Due to recent climate change developments, such as warming and greater UV radiation, BVOC emissions from plants are generally predicted to increase, thus upsetting the biosphere–atmosphere interaction and damaging major ecosystems. [ 27 ] A major class of VOCs is the terpene class of compounds, such as myrcene . [ 28 ]
Providing a sense of scale, a forest 62,000 square kilometres (24,000 sq mi) in area, the size of the U.S. state of Pennsylvania , is estimated to emit 3.4 million kg (7.5 million lb) of terpenes on a typical August day during the growing season. [ 29 ] Maize produces the VOC (Z)-3-hexen-1-ol and other plant hormones. [ 30 ]
Anthropogenic sources emit about 142 teragrams (1.42 × 10 11 kg, or 142 billion kg) of carbon per year in the form of VOCs. [ 31 ]
The major sources of man-made VOCs are: [ 32 ]
Because of their numerous sources indoors, concentrations of VOCs are consistently higher in indoor air (up to ten times higher) than outdoors. [ 35 ] VOCs are emitted by thousands of indoor products. Examples include: paints, varnishes, waxes and lacquers, paint strippers, cleaning and personal care products, pesticides, building materials and furnishings, office equipment such as copiers and printers, correction fluids and carbonless copy paper , graphics and craft materials including glues and adhesives, permanent markers, and photographic solutions. [ 36 ] Human activities such as cooking and cleaning can also emit VOCs. [ 37 ] [ 38 ] Cooking can release long-chain aldehydes and alkanes when oil is heated and terpenes can be released when spices are prepared and/or cooked. [ 37 ] Cleaning products contain a range of VOCs, including monoterpenes , sesquiterpenes , alcohols and esters . Once released into the air, VOCs can undergo reactions with ozone and hydroxyl radicals to produce other VOCs, such as formaldehyde. [ 38 ]
Some VOCs are emitted directly indoors, and some are formed through the subsequent chemical reactions. [ 39 ] [ 40 ] The total concentration of all VOCs (TVOC) indoors can be up to five times higher than that of outdoor levels. [ 41 ]
New buildings experience particularly high levels of VOC off-gassing indoors because of the abundant new materials (building materials, fittings, surface coverings and treatments such as glues, paints and sealants) exposed to the indoor air, emitting multiple VOC gases. [ 42 ] This off-gassing has a multi-exponential decay trend that is discernible over at least two years, with the most volatile compounds decaying with a time-constant of a few days, and the least volatile compounds decaying with a time-constant of a few years. [ 43 ]
New buildings may require intensive ventilation for the first few months, or a bake-out treatment. Existing buildings may be replenished with new VOC sources, such as new furniture, consumer products, and redecoration of indoor surfaces, all of which lead to a continuous background emission of TVOCs, and requiring improved ventilation. [ 42 ]
There are strong seasonal variations in indoor VOC emissions, with emission rates increasing in summer. This is largely because the rate of diffusion of VOC species through materials to the surface increases with temperature. This leads to generally higher concentrations of TVOCs indoors in summer. [ 43 ]
Measurement of VOCs from the indoor air is done with sorption tubes, e.g. Tenax (for VOCs and SVOCs) or DNPH cartridges (for carbonyl compounds), or with air detectors. The VOCs adsorb on these materials and are afterwards desorbed either thermally (Tenax) or by elution (DNPH) and then analyzed by GC–MS / FID or HPLC . Reference gas mixtures are required for quality control of these VOC measurements. [ 44 ] Furthermore, VOC-emitting products used indoors, e.g. building products and furniture, are investigated in emission test chambers under controlled climatic conditions. [ 45 ] For quality control of these measurements, round-robin tests are carried out; these ideally require reproducibly emitting reference materials. [ 44 ] Other methods have used proprietary Silcosteel-coated canisters with constant flow inlets to collect samples over several days. [ 46 ] These methods are not limited by the adsorbing properties of materials like Tenax.
In most countries, a separate definition of VOCs is used with regard to indoor air quality that comprises each organic chemical compound that can be measured as follows: adsorption from air on Tenax TA, thermal desorption, gas chromatographic separation over a 100% nonpolar column ( dimethylpolysiloxane ). VOC (volatile organic compounds) are all compounds that appear in the gas chromatogram between and including n -hexane and n -hexadecane . Compounds appearing earlier are called VVOC (very volatile organic compounds); compounds appearing later are called SVOC (semi-volatile organic compounds).
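As a toy transcription of this retention-window convention (using the carbon number of the bracketing n-alkanes as a stand-in for retention, a simplification assumed here):

```python
def classify_indoor_air_compound(alkane_equivalent_carbon_number):
    """Classify a compound as VVOC, VOC, or SVOC by its elution position
    relative to n-hexane (C6) and n-hexadecane (C16), per the convention above."""
    if alkane_equivalent_carbon_number < 6:
        return "VVOC"   # elutes before n-hexane
    if alkane_equivalent_carbon_number <= 16:
        return "VOC"    # elutes between n-hexane and n-hexadecane inclusive
    return "SVOC"       # elutes after n-hexadecane
```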
France , Germany (AgBB/DIBt), Belgium , Norway (TEK regulation) and Italy (CAM Edilizia) have enacted regulations to limit VOC emissions from commercial products. European industry has developed numerous voluntary ecolabels and rating systems, such as EMICODE , [ 47 ] M1, [ 48 ] Blue Angel , [ 49 ] GuT (textile floor coverings), [ 50 ] Nordic Swan Ecolabel, [ 51 ] EU Ecolabel , [ 52 ] and Indoor Air Comfort . [ 53 ] In the United States , several standards exist; California Standard CDPH Section 01350 [ 54 ] is the most common one. These regulations and standards changed the marketplace, leading to an increasing number of low-emitting products.
Respiratory , allergic , or immune effects in infants or children are associated with man-made VOCs and other indoor or outdoor air pollutants. [ 55 ]
Some VOCs, such as styrene and limonene , can react with nitrogen oxides or with ozone to produce new oxidation products and secondary aerosols , which can cause sensory irritation symptoms. [ 56 ] VOCs contribute to the formation of tropospheric ozone and smog . [ 57 ] [ 58 ]
Health effects include eye, nose, and throat irritation ; headaches , loss of coordination, nausea , hearing disorders [ 59 ] and damage to the liver , kidney, and central nervous system . [ 60 ] Some VOCs are suspected or known to cause cancer in humans. Key signs or symptoms associated with exposure to VOCs include conjunctival irritation, nose and throat discomfort, headache, allergic skin reaction, dyspnea , declines in serum cholinesterase levels, nausea, vomiting, nose bleeding, fatigue, dizziness. [ 61 ]
The ability of organic chemicals to cause health effects varies greatly from those that are highly toxic to those with no known health effects. As with other pollutants, the extent and nature of the health effect will depend on many factors including level of exposure and length of time exposed. Eye and respiratory tract irritation, headaches, dizziness, visual disorders, and memory impairment are among the immediate symptoms that some people have experienced soon after exposure to some organics. At present, not much is known about what health effects occur from the levels of organics usually found in homes. [ 62 ]
While low in comparison to the concentrations found in indoor air, benzene , toluene , and methyl tert-butyl ether (MTBE) have been found in samples of human milk, adding to the total VOC exposure over the day. [ 63 ] A study notes the difference between VOCs in alveolar breath and inspired air, suggesting that VOCs are ingested, metabolized, and excreted via the extra-pulmonary pathway. [ 64 ] VOCs are also ingested through drinking water in varying concentrations. Some VOC concentrations exceeded the EPA's National Primary Drinking Water Regulations and China's National Drinking Water Standards set by the Ministry of Ecology and Environment . [ 65 ]
The presence of VOCs in the air and in groundwater has prompted more studies. Several studies have been performed to measure the effects of dermal absorption of specific VOCs. Dermal exposure to VOCs like formaldehyde and toluene downregulates antimicrobial peptides on the skin such as cathelicidin LL-37 and human β-defensins 2 and 3. [ 66 ] Xylene and formaldehyde worsen allergic inflammation in animal models. [ 67 ] Toluene also increases the dysregulation of filaggrin , a key protein in dermal regulation. [ 68 ] This was confirmed on human skin samples by immunofluorescence (protein loss) and western blotting (mRNA loss). Toluene exposure also decreased trans-epidermal water, leaving the skin's layers more vulnerable. [ 66 ] [ 69 ]
Limit values for VOC emissions into indoor air are published by AgBB , [ 70 ] AFSSET , California Department of Public Health , and others. These regulations have prompted several companies in the paint and adhesive industries to adapt their products by reducing VOC levels. [ citation needed ] VOC labels and certification programs may not properly assess all of the VOCs emitted from the product, including some chemical compounds that may be relevant for indoor air quality. [ 71 ] Each ounce of colorant added to tint paint may contain between 5 and 20 grams of VOCs. A dark color, however, could require 5–15 ounces of colorant, adding up to 300 or more grams of VOCs per gallon of paint. [ 72 ]
VOCs are also found in hospital and health care environments. In these settings, these chemicals are widely used for cleaning, disinfection, and hygiene of the different areas. [ 73 ] Thus, health professionals such as nurses, doctors, sanitation staff, etc., may present with adverse health effects such as asthma ; however, further evaluation is required to determine the exact levels and determinants that influence the exposure to these compounds. [ 73 ] [ 74 ] [ 75 ]
Concentration levels of individual VOCs such as halogenated and aromatic hydrocarbons vary substantially between areas of the same hospital. Generally, ethanol , isopropanol , ether , and acetone are the main compounds in the interior of the site. [ 76 ] [ 77 ] Following the same line, in a study conducted in the United States, it was established that nursing assistants are the most exposed to compounds such as ethanol, while medical equipment preparers are most exposed to 2-propanol . [ 76 ] [ 77 ]
In relation to exposure to VOCs by cleaning and hygiene personnel, a study conducted in 4 hospitals in the United States established that sterilization and disinfection workers are linked to exposures to d-limonene and 2-propanol, while those responsible for cleaning with chlorine-containing products are more likely to have higher levels of exposure to α-pinene and chloroform . [ 75 ] Those who perform floor and other surface cleaning tasks (e.g., floor waxing) and who use quaternary ammonium, alcohol, and chlorine-based products are associated with a higher VOC exposure than the two previous groups, that is, they are particularly linked to exposure to acetone, chloroform, α-pinene, 2-propanol or d-limonene. [ 75 ]
Other healthcare environments, such as nursing and aged care homes, have rarely been studied, even though the elderly and vulnerable populations may spend considerable time in these indoor settings, where they can be exposed to VOCs derived from the common use of cleaning agents, sprays, and fresheners. [ 78 ] [ 79 ] In one study, more than 200 chemicals were identified, of which 41 have adverse health effects, 37 of them being VOCs. The health effects include skin sensitization, reproductive and organ-specific toxicity, carcinogenicity , mutagenicity , and endocrine-disrupting properties. [ 78 ] Furthermore, another study carried out in the same European country found a significant association between breathlessness in the elderly population and elevated exposure to VOCs such as toluene and o-xylene , unlike in the remainder of the population. [ 80 ]
Workers in hospitality are also exposed to VOCs from a variety of sources, including cleaning products (air fresheners, floor cleaners, disinfectants, etc.), building materials and furnishings, and fragrances. [ 81 ] Among the most common VOCs found in hospitality settings are alkanes , which are a major ingredient in cleaning products (35%). [ 81 ] Other products present in hospitality that contain alkanes are laundry detergents, paints, and lubricants. [ 81 ] Housekeepers in particular may also be exposed to formaldehyde , [ 82 ] which is present in some fabrics used to make towels and bedding, although exposure decreases after several washes. [ 83 ] Some hotels still use bleach to clean, and this bleach can form chloroform and carbon tetrachloride . [ 84 ] Fragrances, composed of many different chemicals, are also often used in hotels. [ 81 ]
There are many negative health outcomes associated with VOC exposure in hospitality. VOCs present in cleaning supplies can cause skin, eye, nose, and throat irritation, which can develop into dermatitis . [ 85 ] VOCs in cleaning supplies can also cause more serious conditions, such as respiratory diseases and cancer. [ 81 ] One study found that n-nonane and formaldehyde were the main drivers of eye and upper respiratory tract irritation while cancer risks were driven by chloroform and formaldehyde. [ 81 ] Some solvent-based products have also been shown to cause damage to the kidneys and reproductive organs. [ 85 ] One study showed that the star rating of the hotel may influence VOC exposure, as hotels with lower star ratings tend to have lower quality materials for the furnishings. [ 86 ] Additionally, due to a movement among higher-end hotels to be more environmentally friendly, there has been a shift to using less harsh cleaning agents. [ 86 ]
Another similar environment that exposes workers to VOCs is retail space. Studies have shown that retail spaces have the highest VOC concentrations of all indoor spaces studied, including residences, offices, and vehicles. [ 87 ] [ 88 ] The concentrations and types of VOCs present depend on the type of store, but common sources in retail spaces include motor vehicle exhaust, building materials, cleaning products, merchandise, and fragrances. [ 89 ] One study found that VOC concentrations, particularly of formaldehyde, were higher in retail storage spaces than in the sales areas: formaldehyde concentrations ranged from 8.0 to 19.4 μg/m 3 in retail spaces compared to 14.2 to 45.0 μg/m 3 in storage spaces. [ 90 ] Occupational exposure to VOCs also depends on the task. One study found that workers were exposed to peak total VOC concentrations when removing the plastic film from new products. [ 90 ] This peak was 7 times higher than the total VOC concentration peaks of all other tasks, contributing greatly to retail workers' exposure to VOCs despite being a relatively short task. [ 90 ]
One way that VOC concentrations can be kept minimal within retail and hospitality is by ensuring there is proper air ventilation. [ 91 ] Employers can ensure proper ventilation by placing furniture in a way that enhances air circulation, as well as checking that the HVAC (heating, ventilation, and air conditioning) system is working properly to remove pollutants from the air. [ 91 ] Workers can make sure that air vents are not blocked. [ 91 ]
Obtaining samples for analysis is challenging. VOCs, even when at dangerous levels, are dilute, so preconcentration is typically required. Many components of the atmosphere are mutually incompatible, e.g. ozone and organic compounds, peroxyacyl nitrates and many organic compounds. Furthermore, collection of VOCs by condensation in cold traps also accumulates a large amount of water, which generally must be removed selectively, depending on the analytical techniques to be employed. [ 32 ] Solid-phase microextraction (SPME) techniques are used to collect VOCs at low concentrations for analysis. [ 92 ] As applied to breath analysis, the following modalities are employed for sampling: gas sampling bags, syringes, evacuated steel and glass containers. [ 93 ]
In the U.S., one standard method has been established by the National Institute for Occupational Safety and Health (NIOSH) and another by OSHA. Each method uses a single-component solvent; butanol and hexane, however, cannot be sampled on the same sample matrix using either the NIOSH or the OSHA method. [ 94 ]
VOCs are quantified and identified by two broad techniques. The major technique is gas chromatography (GC). GC instruments allow the separation of gaseous components. When coupled to a flame ionization detector (FID), GC can detect hydrocarbons at parts-per-trillion levels. Using electron capture detectors , GC is also effective for organohalides such as chlorocarbons.
The second major technique associated with VOC analysis is mass spectrometry , which is usually coupled with GC, giving the hyphenated technique of GC-MS. [ 95 ]
Direct injection mass spectrometry techniques are frequently utilized for the rapid detection and accurate quantification of VOCs. [ 96 ] PTR-MS is among the methods that have been used most extensively for the on-line analysis of biogenic and anthropogenic VOCs. [ 97 ] PTR-MS instruments based on time-of-flight mass spectrometry have been reported to reach detection limits of 20 pptv after 100 ms and 750 ppqv after 1 min of measurement (signal integration) time. The mass resolution of these devices is between 7000 and 10,500 m/Δm, so it is possible to separate most common isobaric VOCs and quantify them independently. [ 98 ]
The exhaled human breath contains a few thousand volatile organic compounds and is used in breath biopsy to serve as a VOC biomarker to test for diseases, [ 93 ] such as lung cancer . [ 99 ] One study has shown that "volatile organic compounds ... are mainly blood borne and therefore enable monitoring of different processes in the body." [ 100 ] And it appears that VOC compounds in the body "may be either produced by metabolic processes or inhaled/absorbed from exogenous sources" such as environmental tobacco smoke . [ 99 ] [ 101 ] Chemical fingerprinting and breath analysis of volatile organic compounds has also been demonstrated with chemical sensor arrays , which utilize pattern recognition for detection of component volatile organics in complex mixtures such as breath gas.
To achieve comparability of VOC measurements, reference standards traceable to SI units are required. For a number of VOCs, gaseous reference standards are available from specialty gas suppliers or national metrology institutes , either as cylinders or via dynamic generation methods. However, for many VOCs, such as oxygenated VOCs, monoterpenes , or formaldehyde , no standards are available at the appropriate amount fraction because of the chemical reactivity or adsorption of these molecules. Currently, several national metrology institutes are working to develop the missing standard gas mixtures at trace-level concentrations, minimising adsorption processes, and improving the zero gas. [ 44 ] The final goals are for the traceability and the long-term stability of the standard gases to be in accordance with the data quality objectives (DQO, maximum uncertainty of 20% in this case) required by the WMO / GAW program. [ 102 ]
|
https://en.wikipedia.org/wiki/Volatile_organic_compound
|
Volatile suspended solids ( VSS ) is an analytical parameter that represents the undissolved organic matter in a water sample. More technically, it is a water quality parameter obtained from the loss on ignition of total suspended solids . [ 1 ] Heating of the sample generally takes place in an oven at a temperature of 550 °C [ 1 ] to 600 °C. VSS represents the amount of volatile matter present in the undissolved solid fraction of the measured solution and is an important parameter in wastewater treatment and characterization.
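The arithmetic behind the measurement is simple. The following sketch (function and variable names are hypothetical; masses are assumed in mg and the sample volume in litres) illustrates how VSS is derived from the mass lost on ignition of the dried suspended-solids residue:

```python
def suspended_solids(m_filter, m_dried, m_ignited, volume_l):
    """Loss-on-ignition estimate of TSS and VSS in mg/L.

    m_filter  -- tare mass of the filter or crucible (mg)
    m_dried   -- mass after drying the filtered sample (mg)
    m_ignited -- mass after ignition at roughly 550 C (mg)
    volume_l  -- volume of the filtered water sample (L)
    """
    tss = (m_dried - m_filter) / volume_l    # total suspended solids
    vss = (m_dried - m_ignited) / volume_l   # mass volatilized on ignition
    return tss, vss

# Example: 2.0 mg of residue, of which 1.2 mg burns off, in a 0.1 L sample.
print(suspended_solids(100.0, 102.0, 100.8, 0.1))  # (20.0, 12.0) mg/L
```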
|
https://en.wikipedia.org/wiki/Volatile_suspended_solids
|
In chemistry , volatility is a material quality which describes how readily a substance vaporizes . At a given temperature and pressure , a substance with high volatility is more likely to exist as a vapour , while a substance with low volatility is more likely to be a liquid or solid . Volatility can also describe the tendency of a vapor to condense into a liquid or solid; less volatile substances will more readily condense from a vapor than highly volatile ones. [ 1 ] Differences in volatility can be observed by comparing how fast substances within a group evaporate (or sublimate in the case of solids) when exposed to the atmosphere. A highly volatile substance such as rubbing alcohol ( isopropyl alcohol ) will quickly evaporate, while a substance with low volatility such as vegetable oil will remain condensed. [ 2 ] In general, solids are much less volatile than liquids, but there are some exceptions. Solids that sublimate (change directly from solid to vapor) such as dry ice (solid carbon dioxide) or iodine can vaporize at a similar rate as some liquids under standard conditions. [ 3 ]
Volatility itself has no defined numerical value, but it is often described using vapor pressures or boiling points (for liquids). High vapor pressures indicate a high volatility, while high boiling points indicate low volatility. Vapor pressures and boiling points are often presented in tables and charts that can be used to compare chemicals of interest. Volatility data is typically found through experimentation over a range of temperatures and pressures.
Vapor pressure is a measurement of how readily a condensed phase forms a vapor at a given temperature. A substance enclosed in a sealed vessel initially at vacuum (no air inside) will quickly fill any empty space with vapor. After the system reaches equilibrium and the rate of evaporation matches the rate of condensation, the vapor pressure can be measured. Increasing the temperature increases the amount of vapor that is formed and thus the vapor pressure. In a mixture, each substance contributes to the overall vapor pressure of the mixture, with more volatile compounds making a larger contribution.
Boiling point is the temperature at which the vapor pressure of a liquid is equal to the surrounding pressure, causing the liquid to rapidly evaporate, or boil. It is closely related to vapor pressure, but is dependent on pressure. The normal boiling point is the boiling point at atmospheric pressure , but it can also be reported at higher and lower pressures. [ 3 ]
An important factor influencing a substance's volatility is the strength of the interactions between its molecules. Attractive forces between molecules are what holds materials together, and materials with stronger intermolecular forces , such as most solids, are typically not very volatile. Ethanol and dimethyl ether , two chemicals with the same formula (C 2 H 6 O), have different volatilities due to the different interactions that occur between their molecules in the liquid phase: ethanol molecules are capable of hydrogen bonding while dimethyl ether molecules are not. [ 4 ] The result is an overall stronger attractive force between the ethanol molecules, making ethanol the less volatile substance of the two.
In general, volatility tends to decrease with increasing molecular mass because larger molecules can participate in more intermolecular bonding, [ 5 ] although other factors such as structure and polarity play a significant role. The effect of molecular mass can be partially isolated by comparing chemicals of similar structure (i.e. esters , alkanes, etc.). For instance, linear alkanes exhibit decreasing volatility as the number of carbons in the chain increases.
Knowledge of volatility is often useful in the separation of components from a mixture. When a mixture of condensed substances contains multiple substances with different levels of volatility, its temperature and pressure can be manipulated such that the more volatile components change to a vapor while the less volatile substances remain in the liquid or solid phase. The newly formed vapor can then be discarded or condensed into a separate container. When the vapors are collected, this process is known as distillation . [ 6 ]
The process of petroleum refinement utilizes a technique known as fractional distillation , which allows several chemicals of varying volatility to be separated in a single step. Crude oil entering a refinery is composed of many useful chemicals that need to be separated. The crude oil flows into a distillation tower and is heated, which allows the more volatile components such as butane and kerosene to vaporize. These vapors move up the tower and eventually come in contact with cold surfaces, which causes them to condense and be collected. The most volatile chemicals condense at the top of the column, while the least volatile of the vaporized chemicals condense in the lowest portion. [ 1 ]
The difference in volatility between water and ethanol has long been used to produce concentrated alcoholic beverages (many of these are referred to as " liquors "). In order to increase the concentration of ethanol in the product, beverage makers would heat the initial alcohol mixture to a temperature where most of the ethanol vaporizes while most of the water remains liquid. The ethanol vapor is then collected and condensed in a separate container, resulting in a much more concentrated product. [ 7 ]
Volatility is an important consideration when crafting perfumes . Humans detect odors when aromatic vapors come in contact with receptors in the nose. Ingredients that vaporize quickly after being applied will produce fragrant vapors for a short time before the oils evaporate. Slow-evaporating ingredients can stay on the skin for weeks or even months, but may not produce enough vapors to produce a strong aroma. To prevent these problems, perfume designers carefully consider the volatility of essential oils and other ingredients in their perfumes. Appropriate evaporation rates are achieved by modifying the amount of highly volatile and non-volatile ingredients used. [ 8 ]
|
https://en.wikipedia.org/wiki/Volatility_(chemistry)
|
In finance , volatility (usually denoted by " σ ") is the degree of variation of a trading price series over time, usually measured by the standard deviation of logarithmic returns .
Historic volatility measures a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option).
Volatility as described here refers to the actual volatility , more specifically:
- actual current volatility of a financial instrument for a specified period (for example 30 days or 90 days), based on historical prices over the specified period, with the last observation being the most recent price;
- actual historical volatility , which refers to the volatility of a financial instrument over a specified period but with the last observation on a date in the past;
- actual future volatility , which refers to the volatility of a financial instrument over a specified period starting at the current time and ending at a future date (normally the expiry date of an option ).

Now turning to implied volatility , we have:
- historical implied volatility , which refers to the implied volatility observed from historical prices of the financial instrument (normally options);
- current implied volatility , which refers to the implied volatility observed from current prices of the financial instrument;
- future implied volatility , which refers to the implied volatility observed from future prices of the financial instrument.
For a financial instrument whose price follows a Gaussian random walk , or Wiener process , the width of the distribution increases as time increases. This is because there is an increasing probability that the instrument's price will be farther away from the initial price as time increases. However, rather than increase linearly, the volatility increases with the square-root of time as time increases, because some fluctuations are expected to cancel each other out, so the most likely deviation after twice the time will not be twice the distance from zero.
Since observed price changes do not follow Gaussian distributions, others such as the Lévy distribution are often used. [ 1 ] These can capture attributes such as " fat tails ". Volatility is a statistical measure of dispersion around the average of a random variable such as a market parameter.
For any fund that evolves randomly with time, volatility is defined as the standard deviation of a sequence of random variables, each of which is the return of the fund over some corresponding sequence of (equally sized) times.
Thus, "annualized" volatility σ annually is the standard deviation of an instrument's yearly logarithmic returns . [ 2 ]
The generalized volatility σ T for time horizon T in years is expressed as {\displaystyle \sigma _{T}=\sigma _{\text{annually}}{\sqrt {T}}.}
Therefore, if the daily logarithmic returns of a stock have a standard deviation of σ daily and the time period of returns is P in trading days, the annualized volatility is {\displaystyle \sigma _{\text{annually}}=\sigma _{\text{daily}}{\sqrt {P}},}
so {\displaystyle \sigma _{T}=\sigma _{\text{daily}}{\sqrt {PT}}.}
A common assumption is that P = 252 trading days in any given year. Then, if σ daily = 0.01, the annualized volatility is {\displaystyle \sigma _{\text{annually}}=0.01{\sqrt {252}}\approx 0.1587.}
The monthly volatility (i.e. T = 1 12 {\displaystyle T={\tfrac {1}{12}}} of a year) is {\displaystyle \sigma _{\text{monthly}}=0.01{\sqrt {252\cdot {\tfrac {1}{12}}}}\approx 0.0458.}
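As a quick numerical check of the square-root-of-time scaling above, here is a minimal sketch in Python (NumPy assumed available; names are illustrative):

```python
import numpy as np

def annualized_volatility(daily_log_returns, trading_days=252):
    """Annualize the standard deviation of daily log returns,
    assuming a random walk (square-root-of-time scaling)."""
    sigma_daily = np.std(daily_log_returns, ddof=1)
    return sigma_daily * np.sqrt(trading_days)

# With sigma_daily = 1%, annualized volatility is about 15.87%
# (roughly 16%, cf. the "rule of 16" discussed below) and
# monthly volatility about 4.58%, matching the figures above.
sigma_daily = 0.01
sigma_annual = sigma_daily * np.sqrt(252)
sigma_monthly = sigma_annual * np.sqrt(1 / 12)
print(sigma_annual, sigma_monthly)  # 0.1587..., 0.0458...
```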
The formulas used above to convert returns or volatility measures from one time period to another assume a particular underlying model or process. These formulas are accurate extrapolations of a random walk , or Wiener process, whose steps have finite variance. However, more generally, for natural stochastic processes, the precise relationship between volatility measures for different time periods is more complicated. Some use the Lévy stability exponent α to extrapolate natural processes: {\displaystyle \sigma _{T}=T^{1/\alpha }\sigma .}
If α = 2 the Wiener process scaling relation is obtained, but some people believe α < 2 for financial activities such as stocks, indexes and so on. This was discovered by Benoît Mandelbrot , who looked at cotton prices and found that they followed a Lévy alpha-stable distribution with α = 1.7. (See New Scientist, 19 April 1997.)
Much research has been devoted to modelling and forecasting the volatility of financial returns, and yet few theoretical models explain how volatility comes to exist in the first place.
Roll (1984) shows that volatility is affected by market microstructure . [ 3 ] Glosten and Milgrom (1985) show that at least one source of volatility can be explained by the liquidity provision process. When market makers infer the possibility of adverse selection , they adjust their trading ranges, which in turn increases the band of price oscillation. [ 4 ]
In September 2019, JPMorgan Chase created the Volfefe index , combining "volatility" and the " covfefe " meme, to quantify the effect of US President Donald Trump 's tweets on market volatility.
Volatility matters to investors for at least eight reasons, [ citation needed ] several of which are alternative statements of the same feature or are directly consequent on each other.
Volatility does not measure the direction of price changes, merely their dispersion. This is because when calculating standard deviation (or variance ), all differences are squared, so that negative and positive differences are combined into one quantity. Two instruments with different volatilities may have the same expected return, but the instrument with higher volatility will have larger swings in values over a given period of time.
For example, a lower volatility stock may have an expected (average) return of 7%, with annual volatility of 5%. Ignoring compounding effects, this would indicate returns from approximately negative 3% to positive 17% most of the time (19 times out of 20, or 95% via a two standard deviation rule). A higher volatility stock, with the same expected return of 7% but with annual volatility of 20%, would indicate returns from approximately negative 33% to positive 47% most of the time (19 times out of 20, or 95%). These estimates assume a normal distribution ; in reality stock price movements are found to be leptokurtotic (fat-tailed).
Although the Black-Scholes equation assumes predictable constant volatility, this is not observed in real markets. Amongst more realistic models are Emanuel Derman and Iraj Kani 's [ 5 ] and Bruno Dupire 's local volatility , Poisson process where volatility jumps to new levels with a predictable frequency, and the increasingly popular Heston model of stochastic volatility . [ 6 ]
It is common knowledge that many types of assets experience periods of high and low volatility. That is, during some periods, prices go up and down quickly, while during other times they barely move at all. [ 7 ] In the foreign exchange market , price changes are seasonally heteroskedastic with periods of one day and one week. [ 8 ] [ 9 ]
Periods when prices fall quickly (a crash ) are often followed by prices going down even more, or going up by an unusual amount. Also, a time when prices rise quickly (a possible bubble ) may often be followed by prices going up even more, or going down by an unusual amount.
Most typically, extreme movements do not appear 'out of nowhere'; they are presaged by larger movements than usual or by known uncertainty in specific future events. This is termed autoregressive conditional heteroskedasticity . Whether such large movements have the same direction, or the opposite, is more difficult to say. And an increase in volatility does not always presage a further increase—the volatility may simply go back down again.
Measures of volatility depend not only on the period over which it is measured, but also on the selected time resolution, as the information flow between short-term and long-term traders is asymmetric. [ clarification needed ] As a result, volatility measured with high resolution contains information that is not covered by low resolution volatility and vice versa. [ 10 ]
The risk-parity -weighted volatility of the three assets gold, Treasury bonds, and Nasdaq, acting as a proxy for the market portfolio , [ clarification needed ] seems to have reached a low point of 4% after turning upwards for the 8th time since 1974 in the summer of 2014. [ clarification needed ] [ citation needed ]
Some authors point out that realized volatility and implied volatility are backward- and forward-looking measures, respectively, and do not reflect current volatility. To address this issue, alternative, ensemble measures of volatility have been suggested. One such measure is defined as the standard deviation of ensemble returns instead of time series of returns. [ 11 ] Another considers the regular sequence of directional changes as a proxy for the instantaneous volatility. [ 12 ]
One method of measuring volatility, often used by quantitative option trading firms, divides volatility into two components: clean volatility, the amount of volatility caused by standard events like daily transactions and general noise, and dirty (or event) volatility, the amount caused by specific events like earnings or policy announcements. [ 13 ] For instance, a company like Microsoft would have clean volatility caused by people buying and selling on a daily basis, but dirty volatility from events like quarterly earnings or a possible antitrust announcement.
Breaking down volatility into two components is useful in order to accurately price how much an option is worth, especially when identifying what events may contribute to a swing. The job of fundamental analysts at market makers and option trading boutique firms typically entails trying to assign numeric values to these numbers.
Using a simplification of the above formula it is possible to estimate annualized volatility based solely on approximate observations. Suppose you notice that a market price index, which has a current value near 10,000, has moved about 100 points a day, on average, for many days. This would constitute a 1% daily movement, up or down.
To annualize this, you can use the "rule of 16", that is, multiply by 16 to get 16% as the annual volatility. The rationale for this is that 16 is the square root of 256, which is approximately the number of trading days in a year (252). This also uses the fact that the standard deviation of the sum of n independent variables (with equal standard deviations) is √n times the standard deviation of the individual variables.
Importantly, however, this does not capture (or in some cases may give excessive weight to) occasional large movements in market price which occur less frequently than once a year.
The average magnitude of the observations is merely an approximation of the standard deviation of the market index. Assuming that the market index daily changes are normally distributed with mean zero and standard deviation σ , the expected value of the magnitude of the observations is √(2/ π ) σ = 0.798 σ . The net effect is that this crude approach underestimates the true volatility by about 20%.
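The √(2/ π ) factor is easy to verify numerically; a small sketch (NumPy assumed) compares the mean absolute value of normally distributed daily changes to their true standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
changes = rng.normal(0.0, 1.0, 1_000_000)  # daily changes with sigma = 1

# Mean magnitude is sqrt(2/pi) ~ 0.798 of the true standard deviation,
# so the crude average-move estimate understates sigma by about 20%.
print(np.mean(np.abs(changes)))   # ~0.798
print(np.sqrt(2 / np.pi))         # 0.7978...
```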
Consider the Taylor series : {\displaystyle \log(1+y)=y-{\tfrac {1}{2}}y^{2}+{\tfrac {1}{3}}y^{3}-{\tfrac {1}{4}}y^{4}+\cdots }
Taking only the first two terms one has: {\displaystyle \mathrm {CAGR} \approx \mathrm {AR} -{\tfrac {1}{2}}\sigma ^{2}.}
Volatility thus mathematically represents a drag on the CAGR (formalized as the " volatility tax "). Realistically, most financial assets have negative skewness and leptokurtosis, so this formula tends to be over-optimistic. Some people use the formula {\displaystyle \mathrm {CAGR} \approx \mathrm {AR} -k\sigma ^{2}}
for a rough estimate, where k is an empirical factor (typically five to ten). [ citation needed ]
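A short simulation (NumPy assumed; parameter values are illustrative only) makes the drag visible: the compound growth rate of a volatile return stream falls below its arithmetic mean by roughly σ²/2:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.07, 0.20                  # 7% mean annual return, 20% volatility
r = rng.normal(mu, sigma, 100_000)      # simulated simple annual returns

arith_mean = r.mean()
cagr = np.expm1(np.mean(np.log1p(r)))   # geometric (compound) growth rate

print(arith_mean)            # ~0.07
print(cagr)                  # noticeably lower than the arithmetic mean
print(mu - sigma**2 / 2)     # 0.05, the AR - sigma^2/2 approximation
```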
Despite the sophisticated composition of most volatility forecasting models, critics claim that their predictive power is similar to that of plain-vanilla measures, such as simple past volatility [ 14 ] [ 15 ] especially out-of-sample, where different data are used to estimate the models and to test them. [ 16 ] Other works have agreed, but claim critics failed to correctly implement the more complicated models. [ 17 ] Some practitioners and portfolio managers seem to completely ignore or dismiss volatility forecasting models. For example, Nassim Taleb famously titled one of his Journal of Portfolio Management papers "We Don't Quite Know What We are Talking About When We Talk About Volatility". [ 18 ] In a similar note, Emanuel Derman expressed his disillusion with the enormous supply of empirical models unsupported by theory. [ 19 ] He argues that, while "theories are attempts to uncover the hidden principles underpinning the world around us, as Albert Einstein did with his theory of relativity", we should remember that "models are metaphors – analogies that describe one thing relative to another".
|
https://en.wikipedia.org/wiki/Volatility_(finance)
|
The volatilome (sometimes termed volatolome [ 1 ] or volatome [ 2 ] [ 3 ] [ 4 ] ) contains all of the volatile metabolites as well as other volatile organic and inorganic compounds that originate from an organism , [ 5 ] [ 6 ] [ 7 ] super-organism , or ecosystem . The atmosphere of a living planet could be regarded as its volatilome. While all volatile metabolites in the volatilome can be thought of as a subset of the metabolome , the volatilome also contains exogenously derived compounds that do not derive from metabolic processes (e.g. environmental contaminants), therefore the volatilome can be regarded as a distinct entity from the metabolome. The volatilome is a component of the 'aura' of molecules and microbes (the 'microbial cloud' [ 8 ] ) that surrounds all organisms.
All volatile metabolites detectable by the human nose are termed an ' odour profile'. The association of altered odour profiles with disease states has long been documented in both eastern and western medicine, and recent advances in robotic sample introduction have increased interest in the volatilome as a source for biomarkers that can be used for non-invasive screening for disease. [ 9 ] [ 10 ] Volatile profiles can be collected via active or passive sampling and analysis is predominantly undertaken using gas chromatography–mass spectrometry , with a variety of direct or indirect sample introduction techniques.
|
https://en.wikipedia.org/wiki/Volatilome
|
Volatolomics is a branch of chemistry that studies volatile organic compounds (VOCs) emitted by a biological system , under specific experimental conditions.
According to the Oxford English Dictionary, the suffix ‘omics’ refers to ‘the totality of some sort’. In biology, ‘omics’ techniques are used for the high-throughput analysis of DNA sequences and epigenetic modifications ( genomics ), mRNA and miRNA transcripts ( transcriptomics ), expressed proteins ( proteomics ), as well as synthesised metabolites ( metabolomics ) in a biological system ( cell , tissue , organism , etc.) under a given set of experimental conditions.
Due to the high number of variables that are measured simultaneously, these techniques provide large and complex datasets that require adapted tools for data analysis and interpretation. [ 1 ] [ 2 ] [ 3 ]
The European Council directive 1999/13/EC defines volatile organic compounds (VOCs) as “ any organic compound having at 293.15 K a vapor pressure of 0.01 kPa or more, or having a corresponding volatility under the particular conditions of use ”.
In our daily life, these molecules are notably responsible for the flavor of food products, as well as for the fragrance of essential oils used in the cosmetics industry . [ 4 ]
In nature, these molecules are produced by bacteria and fungi . [ 5 ]
They are also synthesized by plants (flowers, fruits, leaves and roots) [ 6 ] [ 7 ] [ 8 ] [ 9 ] and animals (humans, [ 10 ] insects, [ 11 ] etc.).
The profiling of VOCs emitted by living organisms is taking on increasing importance in various scientific domains, such as medicine , food science , and chemical ecology .
For instance, in medicine, non-invasive diagnosis techniques of cancer based on the profiling of VOCs from the exhaled breath have been developed. [ 12 ] [ 13 ] To this end, a variety of novel sensing approaches and nanomaterial based sensors are being used in volatolomics research. [ 14 ] [ 15 ]
In the field of chemical ecology, gas chromatography coupled to mass spectrometry (GC-MS) is often used to characterize the volatile semiochemicals involved in the biotic interactions taking place aboveground [ 6 ] [ 16 ] [ 17 ] and belowground [ 9 ] [ 18 ] [ 19 ] [ 20 ] between plants, insects and phytopathogens .
|
https://en.wikipedia.org/wiki/Volatolomics
|
Volcanic gases are gases given off by active (or, at times, by dormant) volcanoes . These include gases trapped in cavities ( vesicles ) in volcanic rocks , dissolved or dissociated gases in magma and lava , or gases emanating from lava, from volcanic craters or vents. Volcanic gases can also be emitted through groundwater heated by volcanic action .
The sources of volcanic gases on Earth include primordial and recycled constituents from the Earth's mantle , assimilated constituents from the Earth's crust , and groundwater and the Earth's atmosphere .
Substances that may become gaseous or give off gases when heated are termed volatile substances.
The principal components of volcanic gases are water vapor (H 2 O), carbon dioxide (CO 2 ), sulfur either as sulfur dioxide (SO 2 ) (high-temperature volcanic gases) or hydrogen sulfide (H 2 S) (low-temperature volcanic gases), nitrogen , argon , helium , neon , methane , carbon monoxide and hydrogen . Other compounds detected in volcanic gases are oxygen (meteoric) [ clarification needed ] , hydrogen chloride , hydrogen fluoride , hydrogen bromide , sulfur hexafluoride , carbonyl sulfide , and organic compounds . Exotic trace compounds include mercury , [ 1 ] halocarbons (including CFCs ), [ 2 ] and halogen oxide radicals . [ 3 ]
The abundance of gases varies considerably from volcano to volcano, with volcanic activity and with tectonic setting. Water vapour is consistently the most abundant volcanic gas, normally comprising more than 60% of total emissions. Carbon dioxide typically accounts for 10 to 40% of emissions. [ 4 ]
Volcanoes located at convergent plate boundaries emit more water vapor and chlorine than volcanoes at hot spots or divergent plate boundaries. This is caused by the addition of seawater into magmas formed at subduction zones . Convergent plate boundary volcanoes also have higher H 2 O/H 2 , H 2 O/CO 2 , CO 2 /He and N 2 /He ratios than hot spot or divergent plate boundary volcanoes. [ 4 ]
Magma contains dissolved volatile components , as described above. The solubilities of the different volatile constituents are dependent on pressure, temperature and the composition of the magma . As magma ascends towards the surface, the ambient pressure decreases, which decreases the solubility of the dissolved volatiles. Once the solubility decreases below the volatile concentration, the volatiles will tend to come out of solution within the magma (exsolve) and form a separate gas phase (the magma is super-saturated in volatiles).
The gas will initially be distributed throughout the magma as small bubbles, that cannot rise quickly through the magma. As the magma ascends the bubbles grow through a combination of expansion through decompression and growth as the solubility of volatiles in the magma decreases further causing more gas to exsolve. Depending on the viscosity of the magma, the bubbles may start to rise through the magma and coalesce, or they remain relatively fixed in place until they begin to connect and form a continuously connected network. In the former case, the bubbles may rise through the magma and accumulate at a vertical surface, e.g. the 'roof' of a magma chamber. In volcanoes with an open path to the surface, e.g. Stromboli in Italy , the bubbles may reach the surface and as they pop small explosions occur. In the latter case, the gas can flow rapidly through the continuous permeable network towards the surface. This mechanism has been used to explain activity at Santiaguito, Santa Maria volcano , Guatemala [ 5 ] and Soufrière Hills Volcano, Montserrat . [ 6 ] If the gas cannot escape fast enough from the magma, it will fragment the magma into small particles of ash. The fluidised ash has a much lower resistance to motion than the viscous magma, so accelerates, causing further expansion of the gases and acceleration of the mixture. This sequence of events drives explosive volcanism. Whether gas can escape gently (passive eruptions) or not (explosive eruptions) is determined by the total volatile contents of the initial magma and the viscosity of the magma, which is controlled by its composition.
The term 'closed system' degassing refers to the case where gas and its parent magma ascend together and in equilibrium with each other. The composition of the emitted gas is in equilibrium with the composition of the magma at the pressure, temperature where the gas leaves the system. In 'open system' degassing, the gas leaves its parent magma and rises up through the overlying magma without remaining in equilibrium with that magma. The gas released at the surface has a composition that is a mass-flow average of the magma exsolved at various depths and is not representative of the magma conditions at any one depth.
Molten rock (either magma or lava) near the atmosphere releases high-temperature volcanic gas (>400 °C).
In explosive volcanic eruptions , the sudden release of gases from magma may cause rapid movements of the molten rock. When the magma encounters water, seawater, lake water or groundwater, it can be rapidly fragmented. The rapid expansion of gases is the driving mechanism of most explosive volcanic eruptions. However, a significant portion of volcanic gas release occurs during quasi-continuous quiescent phases of active volcanism.
As magmatic gas travelling upward encounters meteoric water in an aquifer , steam is produced. Latent magmatic heat can also cause meteoric waters to ascend as a vapour phase. Extended fluid-rock interaction of this hot mixture can leach constituents out of the cooling magmatic rock and also the country rock , causing volume changes and phase transitions, reactions and thus an increase in ionic strength of the upward percolating fluid. This process also decreases the fluid's pH . Cooling can cause phase separation and mineral deposition, accompanied by a shift toward more reducing conditions. At the surface expression of such hydrothermal systems, low-temperature volcanic gases (<400 °C) are either emanating as steam-gas mixtures or in dissolved form in hot springs . At the ocean floor, such hot supersaturated hydrothermal fluids form gigantic chimney structures called black smokers , at the point of emission into the cold seawater .
Over geological time, this process of hydrothermal leaching, alteration, and/or redeposition of minerals in the country rock is an effective process of concentration that generates certain types of economically valuable ore deposits.
The gas release can occur by advection through fractures, or via diffuse degassing through large areas of permeable ground as diffuse degassing structures (DDS). At sites of advective gas loss, precipitation of sulfur and rare minerals forms sulfur deposits and small sulfur chimneys, called fumaroles . [ 7 ] Very low-temperature (below 100 °C) fumarolic structures are also known as solfataras . Sites of cold degassing of predominantly carbon dioxide are called mofettes . Hot springs on volcanoes often show a measurable amount of magmatic gas in dissolved form.
Present day global emissions of volcanic gases to the atmosphere can be classified as eruptive or non-eruptive. Although all volcanic gas species are emitted to the atmosphere, the emissions of CO 2 (a greenhouse gas ) and SO 2 have received the most study.
It has long been recognized that eruptions contribute much lower total SO 2 emissions than passive degassing does. [ 8 ] [ 9 ] Fischer et al (2019) estimated that, from 2005 to 2015, SO 2 emissions were 2.6 teragrams (Tg, i.e. 10 12 g) per year during eruptions [ 10 ] and 23.2 ± 2 Tg per year during non-eruptive periods of passive degassing. [ 10 ] During the same time interval, CO 2 emissions from volcanoes were estimated to be 1.8 ± 0.9 Tg per year during eruptions [ 10 ] and 51.3 ± 5.7 Tg per year during non-eruptive activity. [ 10 ] Therefore, CO 2 emissions during volcanic eruptions are less than 10% of CO 2 emissions released during non-eruptive volcanic activity.
The 15 June 1991 eruption of Mount Pinatubo ( VEI 6) in the Philippines released a total of 18 ± 4 Tg of SO 2 . [ 11 ] Such large VEI 6 eruptions are rare and only occur once every 50 – 100 years. The 2010 eruptions of Eyjafjallajökull (VEI 4) in Iceland emitted a total of 5.1 Tg CO 2 . [ 12 ] VEI 4 eruptions occur about once per year.
For comparison, Le Quéré et al. estimate that human burning of fossil fuels and production of cement processed 9.3 Gt of carbon per year from 2006 through 2015, [ 13 ] creating up to 34.1 Gt of CO 2 annually.
Some recent volcanic CO 2 emission estimates are higher than Fischer et al (2019). [ 10 ] The estimates of Burton et al. (2013) of 540 Tg CO 2 /year [ 14 ] and of Werner et al. (2019) of 220 - 300 Tg CO 2 /year [ 12 ] take into account diffuse CO 2 emissions from volcanic regions.
Volcanic gases were collected and analysed as long ago as 1790 by Scipione Breislak in Italy. [ 15 ] The composition of volcanic gases is dependent on the movement of magma within the volcano. Therefore, sudden changes in gas composition often presage a change in volcanic activity. Accordingly, a large part of hazard monitoring of volcanoes involves regular measurement of gaseous emissions. For example, an increase in the CO 2 content of gases at Stromboli has been ascribed to injection of fresh volatile-rich magma at depth within the system. [ 16 ]
Volcanic gases can be sensed (measured in-situ) or sampled for further analysis. Volcanic gas sensing can be done within the gas itself (in-situ) or by remote sensing .
Sulphur dioxide (SO 2 ) absorbs strongly in the ultraviolet wavelengths and has low background concentrations in the atmosphere. These characteristics make sulphur dioxide a good target for volcanic gas monitoring. It can be detected by satellite-based instruments, which allow for global monitoring, and by ground-based instruments such as DOAS. DOAS arrays are placed near some well-monitored volcanoes and used to estimate the flux of SO 2 emitted. The Multi-Component Gas Analyzer System (Multi-GAS) is also used to remotely measure CO 2 , SO 2 and H 2 S. [ 17 ] The fluxes of other gases are usually estimated by measuring the ratios of different gases within the volcanic plume, e.g. by FTIR, electrochemical sensors at the volcano crater rim, or direct sampling, and multiplying the ratio of the gas of interest to SO 2 by the SO 2 flux.
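The flux-scaling step described above amounts to a single multiplication; a minimal sketch (hypothetical function name and example values; the flux and the in-plume ratio must be expressed on the same basis, e.g. mass per day):

```python
def gas_flux_from_ratio(so2_flux, gas_to_so2_ratio):
    """Estimate the flux of another plume gas by scaling a measured
    SO2 flux (e.g. from a DOAS array) by an in-plume gas/SO2 ratio
    (e.g. from Multi-GAS, FTIR, or direct sampling)."""
    return so2_flux * gas_to_so2_ratio

# Hypothetical example: 2000 t/day of SO2 and a CO2/SO2 mass ratio of 4
co2_flux = gas_flux_from_ratio(2000.0, 4.0)   # -> 8000 t/day of CO2
```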
Direct sampling of volcanic gases is often done using an evacuated flask with caustic solution, a method first used by Robert W. Bunsen (1811-1899) and later refined by the German chemist Werner F. Giggenbach (1937-1997), dubbed the Giggenbach bottle . Other methods include collection in evacuated empty containers, in flow-through glass tubes, in gas wash bottles (cryogenic scrubbers), on impregnated filter packs, and on solid adsorbent tubes.
Analytical techniques for gas samples comprise gas chromatography with thermal conductivity detection (TCD), flame ionization detection (FID) and mass spectrometry (GC-MS) for gases, and various wet chemical techniques for dissolved species (e.g., acidimetric titration for dissolved CO 2 , and ion chromatography for sulfate , chloride , fluoride ). The trace metal, trace organic and isotopic composition is usually determined by different mass spectrometric methods.
Certain constituents of volcanic gases may show very early signs of changing conditions at depth, making them a powerful tool to predict imminent unrest. Used in conjunction with monitoring data on seismicity and deformation , correlative monitoring gains great efficiency. Volcanic gas monitoring is a standard tool of any volcano observatory . Unfortunately, the most precise compositional data still require dangerous field sampling campaigns. However, remote sensing techniques have advanced tremendously through the 1990s. The Deep Earth Carbon Degassing Project is employing Multi-GAS remote sensing to monitor 9 volcanoes on a continuous basis.
Volcanic gases were directly responsible for approximately 3% of all volcano-related deaths of humans between 1900 and 1986. [ 4 ] Some volcanic gases kill by acidic corrosion ; others kill by asphyxiation . Some volcanic gases including sulfur dioxide, hydrogen chloride, hydrogen sulfide and hydrogen fluoride react with other atmospheric particles to form aerosols . [ 4 ]
|
https://en.wikipedia.org/wiki/Volcanic_gas
|
In statistics, a volcano plot is a type of scatter-plot that is used to quickly identify changes in large data sets composed of replicate data. [ 1 ] [ 2 ] It plots significance versus fold-change on the y and x axes, respectively. These plots are increasingly common in omic experiments such as genomics , proteomics , and metabolomics where one often has a list of many thousands of replicate data points between two conditions and one wishes to quickly identify the most meaningful changes. A volcano plot combines a measure of statistical significance from a statistical test (e.g., a p value from an ANOVA model) with the magnitude of the change, enabling quick visual identification of those data-points (genes, etc.) that display large magnitude changes that are also statistically significant .
A volcano plot is a sophisticated data visualization tool used in statistical and genomic analyses to illustrate the relationship between the magnitude of change and statistical significance. It is constructed by plotting the negative logarithm (base 10) of the p-value on the y-axis, ensuring that data points with lower p-values—indicative of higher statistical significance—are positioned toward the top of the plot. The x-axis represents the logarithm of the fold change between two conditions, allowing for a symmetric representation of both upregulated and downregulated changes relative to the center. This transformation ensures that equivalent deviations in either direction are equidistant from the origin, facilitating intuitive interpretation.
The plot inherently highlights two critical regions of interest: data points that reside in the upper extremes of the graph while being significantly displaced to the left or right. These points correspond to variables that exhibit both substantial fold changes (magnitude of effect) and exceptional statistical significance, making them prime candidates for further investigation in differential analyses. The volcano plot, therefore, serves as a powerful means of identifying key biomarkers, differentially expressed genes, or other significant entities within complex datasets.
Additional information can be added by coloring the points according to a third dimension of data (such as signal intensity), but this is not uniformly employed. Volcano plots are also used to graphically display a significance analysis of microarrays (SAM) gene selection criterion, an example of regularization . [ 3 ]
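As a concrete illustration, the following sketch (Python with NumPy, SciPy, and Matplotlib assumed; the data and thresholds are synthetic) builds a volcano plot from replicate measurements of two conditions:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_reps = 5000, 5
a = rng.normal(0.0, 1.0, (n_genes, n_reps))   # condition A, log2-scale values
b = rng.normal(0.0, 1.0, (n_genes, n_reps))   # condition B
b[:100] += 2.0                                # 100 genes with a true change

log2_fc = b.mean(axis=1) - a.mean(axis=1)     # log2 fold change (x axis)
p = stats.ttest_ind(b, a, axis=1).pvalue      # per-gene p-values

# Points that are both far from zero fold change and highly significant
sig = (np.abs(log2_fc) > 1.0) & (p < 0.01)    # example thresholds

plt.scatter(log2_fc, -np.log10(p), s=4, c=np.where(sig, "crimson", "grey"))
plt.xlabel("log2 fold change")
plt.ylabel("-log10 p-value")
plt.show()
```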
The concept of the volcano plot can be generalized to other applications, where the x axis is related to a measure of the strength of a statistical signal, and the y axis is related to a measure of the statistical significance of the signal. For example, in a genetic association case-control study, such as a genome-wide association study , a point in a volcano plot represents a single-nucleotide polymorphism . Its x value can be the logarithm of the odds ratio and its y value can be -log 10 of the p value from a Chi-square test or a Chi-square test statistic . [ 4 ]
Volcano plots show a characteristic upward two-armed shape because the x axis, i.e. the underlying log 2 fold changes, is generally normally distributed, whereas the y axis, the -log 10 p values, tends toward greater significance for fold changes that deviate more strongly from zero.
The density of the normal distribution takes the form {\displaystyle f(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}.}
So the ln {\displaystyle \ln } of that is {\displaystyle \ln f(x)=-\ln \left(\sigma {\sqrt {2\pi }}\right)-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}},}
and the negative ln {\displaystyle \ln } is {\displaystyle -\ln f(x)=\ln \left(\sigma {\sqrt {2\pi }}\right)+{\frac {(x-\mu )^{2}}{2\sigma ^{2}}},}
which is a parabola whose arms reach upwards on the right and left sides. The upper bound of the data is one parabola and the lower bound is another parabola.
|
https://en.wikipedia.org/wiki/Volcano_plot_(statistics)
|
CORDIC ( coordinate rotation digital computer ), Volder's algorithm , Digit-by-digit method , Circular CORDIC ( Jack E. Volder ), [ 1 ] [ 2 ] Linear CORDIC , Hyperbolic CORDIC (John Stephen Walther), [ 3 ] [ 4 ] and Generalized Hyperbolic CORDIC ( GH CORDIC ) (Yuanyong Luo et al.), [ 5 ] [ 6 ] is a simple and efficient algorithm to calculate trigonometric functions , hyperbolic functions , square roots , multiplications , divisions , and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore also an example of digit-by-digit algorithms . CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays or FPGAs), as the only operations they require are additions , subtractions , bitshift and lookup tables . As such, they all belong to the class of shift-and-add algorithms . In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks hardware multiply for cost or space reasons.
Similar mathematical techniques were published by Henry Briggs as early as 1624 [ 7 ] [ 8 ] and Robert Flower in 1771, [ 9 ] but CORDIC is better optimized for low-complexity finite-state CPUs.
CORDIC was conceived in 1956 [ 10 ] [ 11 ] by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber 's navigation computer with a more accurate and faster real-time digital solution. [ 11 ] Therefore, CORDIC is sometimes referred to as a digital resolver . [ 12 ] [ 13 ]
In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics : [ 11 ]
{\displaystyle K_{n}R\sin(\theta \pm \varphi )=R\sin(\theta )\pm 2^{-n}R\cos(\theta ),} where φ {\displaystyle \varphi } is such that tan ( φ ) = 2 − n {\displaystyle \tan(\varphi )=2^{-n}} , and K n := 1 + 2 − 2 n {\displaystyle K_{n}:={\sqrt {1+2^{-2n}}}} .
His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it. [ 10 ] [ 11 ] The report also discussed the possibility to compute hyperbolic coordinate rotation , logarithms and exponential functions with modified CORDIC algorithms. [ 10 ] [ 11 ] Utilizing CORDIC for multiplication and division was also conceived at this time. [ 11 ] Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD). [ 11 ] [ 14 ]
In 1958, Convair finally started to build a demonstration system to solve radar fix –taking problems named CORDIC I , completed in 1960 without Volder, who had left the company already. [ 1 ] [ 11 ] More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962. [ 11 ] [ 15 ]
Volder's CORDIC algorithm was first described in public in 1959, [ 1 ] [ 2 ] [ 11 ] [ 13 ] [ 16 ] which caused it to be incorporated into navigation computers by companies including Martin-Orlando , Computer Control , Litton , Kearfott , Lear-Siegler , Sperry , Raytheon , and Collins Radio . [ 11 ]
Volder teamed up with Malcolm McMillan to build Athena , a fixed-point desktop calculator utilizing his binary CORDIC algorithm. [ 17 ] The design was introduced to Hewlett-Packard in June 1965, but not accepted. [ 17 ] Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM [ 18 ] ) had proposed as pseudo-multiplication and pseudo-division in 1961. [ 18 ] [ 19 ] Meggitt's method also suggested the use of base 10 [ 18 ] rather than base 2 , as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, [ 20 ] [ 19 ] built by and conceptually derived from Thomas E. Osborne 's prototypical Green Machine , a four-function, floating-point desktop calculator he had completed in DTL logic [ 17 ] in December 1964. [ 21 ] This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year. [ 17 ] [ 21 ] [ 22 ] [ 23 ]
When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1 [ 24 ] (September 1964) and LOCI-2 (January 1965) [ 25 ] [ 26 ] Logarithmic Computing Instrument desktop calculators, [ 27 ] they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang 's patents in 1968. [ 19 ] [ 28 ] [ 29 ] [ 30 ]
John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions , natural exponentials , natural logarithms , multiplications , divisions , and square roots . [ 31 ] [ 3 ] [ 4 ] [ 32 ] The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code. [ 28 ] This development resulted in the first scientific handheld calculator , the HP-35 in 1972. [ 28 ] [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019. [ 5 ] [ 6 ] [ 38 ] [ 39 ] [ 40 ] Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC. [ 5 ]
Originally, CORDIC was implemented only using the binary numeral system and despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC continued to remain mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still suggested it as a novelty as late as 1973 [ 16 ] [ 13 ] [ 41 ] [ 42 ] [ 43 ] and it was found only later that Hewlett-Packard had implemented it in 1966 already. [ 11 ] [ 13 ] [ 20 ] [ 28 ]
Decimal CORDIC became widely used in pocket calculators , [ 13 ] most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed.
CORDIC has been implemented in the ARM-based STM32G4 , Intel 8087 , [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] 80287 , [ 47 ] [ 48 ] 80387 [ 47 ] [ 48 ] up to the 80486 [ 43 ] coprocessor series as well as in the Motorola 68881 [ 43 ] [ 44 ] and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system.
CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition , QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing , communication systems , robotics and 3D graphics apart from general scientific and technical computation. [ 49 ] [ 50 ]
The algorithm was used in the navigational system of the Apollo program 's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module . [ 51 ] [ 52 ] CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication. [ 53 ]
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC ).
In fact, CORDIC is a standard drop-in IP block in FPGA development applications such as Vivado for Xilinx, while a power series implementation is not, owing to the specificity of such an IP: CORDIC can compute many different functions (general purpose), while a hardware multiplier configured to execute a power series implementation can only compute the function it was designed for.
On the other hand, when a hardware multiplier is available ( e.g. , in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations. [ citation needed ]
The STM32G4 , STM32U5 and STM32H5 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed-signal applications such as graphics for human-machine interfaces and field-oriented control of motors. While not as fast as a power series approximation, CORDIC is indeed faster than interpolating table-based implementations such as the ones provided by the ARM CMSIS and C standard libraries, [ 54 ] though the results may be slightly less accurate, as the CORDIC modules provided only achieve 20 bits of precision in the result. Most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating-point precision (24 bits) and can likely achieve relative error to that precision. [ 55 ] Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks.
The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well behaved relative error. [ 56 ] Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error.
Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log 10 , natural log, the need to implement CORDIC in them with software is nearly non-existent. Only microcontroller or special safety and time-constrained software applications would need to consider using CORDIC.
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine for an angle β {\displaystyle \beta } , the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector v 0 {\displaystyle v_{0}} : {\displaystyle v_{0}={\begin{bmatrix}1\\0\end{bmatrix}}.}
In the first iteration, this vector is rotated 45° counterclockwise to get the vector v 1 {\displaystyle v_{1}} . Successive iterations rotate the vector in one or the other direction by size-decreasing steps, until the desired angle has been achieved. Each step angle is γ i = arctan ( 2 − i ) {\displaystyle \gamma _{i}=\arctan {(2^{-i})}} for i = 0 , 1 , 2 , … {\displaystyle i=0,1,2,\dots } .
More formally, every iteration calculates a rotation, which is performed by multiplying the vector v i {\displaystyle v_{i}} with the rotation matrix R i {\displaystyle R_{i}} :
The rotation matrix is given by
{\displaystyle R_{i}={\begin{bmatrix}\cos(\gamma _{i})&-\sin(\gamma _{i})\\\sin(\gamma _{i})&\cos(\gamma _{i})\end{bmatrix}}.}
Using the trigonometric identity:
{\displaystyle \tan(\gamma _{i})={\frac {\sin(\gamma _{i})}{\cos(\gamma _{i})}},}
the cosine factor can be taken out to give:
{\displaystyle R_{i}=\cos(\gamma _{i}){\begin{bmatrix}1&-\tan(\gamma _{i})\\\tan(\gamma _{i})&1\end{bmatrix}}.}
The expression for the rotated vector v i + 1 = R i v i {\displaystyle v_{i+1}=R_{i}v_{i}} then becomes:
{\displaystyle v_{i+1}=\cos(\gamma _{i}){\begin{bmatrix}1&-\tan(\gamma _{i})\\\tan(\gamma _{i})&1\end{bmatrix}}{\begin{bmatrix}x_{i}\\y_{i}\end{bmatrix}},}
where x i {\displaystyle x_{i}} and y i {\displaystyle y_{i}} are the components of v i {\displaystyle v_{i}} . Setting the angle γ i {\displaystyle \gamma _{i}} for each iteration such that tan ( γ i ) = ± 2 − i {\displaystyle \tan(\gamma _{i})=\pm 2^{-i}} still yields a series that converges to every possible output value. The multiplication with the tangent can therefore be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift . The expression then becomes:
{\displaystyle v_{i+1}=\cos(\gamma _{i}){\begin{bmatrix}1&-\sigma _{i}2^{-i}\\\sigma _{i}2^{-i}&1\end{bmatrix}}{\begin{bmatrix}x_{i}\\y_{i}\end{bmatrix}},}
and σ i {\displaystyle \sigma _{i}} is used to determine the direction of the rotation: if the remaining angle to be rotated is positive, then σ i {\displaystyle \sigma _{i}} is +1, otherwise it is −1.
The following trigonometric identity can be used to replace the cosine:
{\displaystyle \cos(\gamma _{i})={\frac {1}{\sqrt {1+\tan ^{2}(\gamma _{i})}}},}
giving this multiplier for each iteration:
{\displaystyle K_{i}={\frac {1}{\sqrt {1+\tan ^{2}(\gamma _{i})}}}={\frac {1}{\sqrt {1+2^{-2i}}}}.}
The K i {\displaystyle K_{i}} factors can then be taken out of the iterative process and applied all at once afterwards with a scaling factor K ( n ) {\displaystyle K(n)} :
{\displaystyle K(n)=\prod _{i=0}^{n-1}K_{i}=\prod _{i=0}^{n-1}{\frac {1}{\sqrt {1+2^{-2i}}}},}
which is calculated in advance and stored in a table or as a single constant, if the number of iterations is fixed. This correction could also be made in advance, by scaling v 0 {\displaystyle v_{0}} and hence saving a multiplication. Additionally, it can be noted that [ 43 ]
{\displaystyle K=\lim _{n\to \infty }K(n)\approx 0.6072529350088812561694}
to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for K {\displaystyle K} altogether, resulting in a processing gain A {\displaystyle A} : [ 57 ]
{\displaystyle A={\frac {1}{K}}\approx 1.64676025812107.}
After a sufficient number of iterations, the vector's angle will be close to the wanted angle β {\displaystyle \beta } . For most ordinary purposes, 40 iterations ( n = 40) are sufficient to obtain the correct result to the 10th decimal place.
The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of σ {\displaystyle \sigma } ). This is done by keeping track of how much the angle has been rotated so far and subtracting that from the wanted angle; to get closer to the wanted angle β {\displaystyle \beta } , if the remaining angle β i + 1 {\displaystyle \beta _{i+1}} is positive, the next rotation is counterclockwise, otherwise it is clockwise:
{\displaystyle \beta _{i+1}=\beta _{i}-\sigma _{i}\gamma _{i},\quad \gamma _{i}=\arctan(2^{-i}).}
The values of γ n {\displaystyle \gamma _{n}} must also be precomputed and stored. For small angles it can be approximated with arctan ( γ n ) ≈ γ n {\displaystyle \arctan(\gamma _{n})\approx \gamma _{n}} to reduce the table size.
As can be seen in the illustration above, the sine of the angle β {\displaystyle \beta } is the y coordinate of the final vector v n , {\displaystyle v_{n},} while the x coordinate is the cosine value.
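To make the procedure concrete, the following is a minimal floating-point sketch of rotation mode in Python. It is illustrative rather than a faithful hardware description: a real implementation would use fixed-point arithmetic and replace the multiplication by 2 −i with an arithmetic shift, and the names ( cordic_sin_cos , N_ITER , ANGLES , K ) are invented for this example.

```python
import math

# Precompute the rotation angles gamma_i = arctan(2^-i) and the scaling
# constant K(n); both depend only on the iteration count, so in hardware
# they would live in a small lookup table.
N_ITER = 40
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
K = 1.0
for i in range(N_ITER):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # K(n) = prod 1/sqrt(1 + 2^-2i)

def cordic_sin_cos(beta):
    """Return (cos(beta), sin(beta)) for -pi/2 <= beta <= pi/2."""
    x, y = 1.0, 0.0                          # start with the vector v0 = (1, 0)
    for i in range(N_ITER):
        sigma = 1.0 if beta >= 0 else -1.0   # rotate toward the remaining angle
        factor = sigma * 2.0 ** -i           # tan(gamma_i) = +/- 2^-i (a shift)
        x, y = x - y * factor, y + x * factor
        beta -= sigma * ANGLES[i]            # subtract the rotation just made
    return x * K, y * K                      # apply the deferred scaling K(n)

c, s = cordic_sin_cos(0.5)
print(c - math.cos(0.5), s - math.sin(0.5))  # both differences should be tiny
```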
The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on β i {\displaystyle \beta _{i}} being positive or negative.
The vectoring-mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the x axis (and therefore reducing the y coordinate to zero). At each step, the value of y determines the direction of the rotation. The final value of β i {\displaystyle \beta _{i}} contains the total angle of rotation. The final value of x will be the magnitude of the original vector scaled by K . So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates.
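Under the same assumptions, and reusing N_ITER , ANGLES and K from the rotation-mode sketch above, vectoring mode might be sketched as follows; it recovers the magnitude and angle of a point with positive x coordinate:

```python
def cordic_rect_to_polar(x, y):
    """Vectoring mode: rotate (x, y) onto the x axis, accumulating the angle."""
    angle = 0.0
    for i in range(N_ITER):
        sigma = -1.0 if y >= 0 else 1.0      # always rotate y toward zero
        factor = sigma * 2.0 ** -i
        x, y = x - y * factor, y + x * factor
        angle -= sigma * ANGLES[i]           # the vector's angle accumulates here
    return x * K, angle                      # undo the 1/K growth of the length

print(cordic_rect_to_polar(3.0, 4.0))        # approximately (5.0, 0.92729...)
```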
In Java the Math class has a scalb(double x,int scale) method to perform such a shift, [ 58 ] C has the ldexp function, [ 59 ] and the x86 class of processors have the fscale floating point operation. [ 60 ]
The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters .
In two of the publications by Vladimir Baykov, [ 61 ] [ 62 ] it was proposed to use the double iterations method for the implementation of the functions: arcsine, arccosine, natural logarithm, exponential function, as well as for the calculation of the hyperbolic functions. In the double iterations method, unlike the classical CORDIC method, where the iteration step value changes on every iteration, each step value is repeated twice and changes only every other iteration. Hence the exponent sequence for double iterations is i = 0 , 0 , 1 , 1 , 2 , 2 … {\displaystyle i=0,0,1,1,2,2\dots } , whereas with ordinary iterations it is i = 0 , 1 , 2 … {\displaystyle i=0,1,2\dots } . The double iteration method guarantees the convergence of the method throughout the valid range of argument changes.
The generalization of the CORDIC convergence problems for the arbitrary positional number system with radix R {\displaystyle R} showed [ 63 ] that for the functions sine, cosine, and arctangent, it is enough to perform R − 1 {\displaystyle R-1} iterations for each value of i ( i = 0 or 1 to n , where n is the number of digits), i.e. for each digit of the result. For the natural logarithm, exponential, hyperbolic sine, cosine and arctangent, R {\displaystyle R} iterations should be performed for each value of i {\displaystyle i} . For the functions arcsine and arccosine, 2 ( R − 1 ) {\displaystyle 2(R-1)} iterations should be performed for each result digit, i.e. for each value of i {\displaystyle i} . [ 63 ]
For the inverse hyperbolic sine and cosine functions, the number of iterations will be 2 R {\displaystyle 2R} for each i {\displaystyle i} , that is, for each result digit.
CORDIC is part of the class of "shift-and-add" algorithms , as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm , which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle x {\displaystyle x} (in radians) by computing the exponential of 0 + i x {\displaystyle 0+ix} , which is cis ( x ) = cos ( x ) + i sin ( x ) {\displaystyle \operatorname {cis} (x)=\cos(x)+i\sin(x)} . The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor ( K ).
|
https://en.wikipedia.org/wiki/Volder's_algorithm
|
The Volhard–Erdmann cyclization is an organic synthesis of alkyl and aryl thiophenes by cyclization of disodium succinate or other 1,4-difunctional compounds (γ-oxo acids, 1,4-diketones, chloroacetyl-substituted esters) with phosphorus heptasulfide . The reaction is named after Jacob Volhard and Hugo Erdmann . [ 1 ]
An example is the synthesis of 3-methylthiophene starting from itaconic acid : [ 2 ]
|
https://en.wikipedia.org/wiki/Volhard–Erdmann_cyclization
|
Volker Halbach (born 21 October 1965 in Ingolstadt, Germany ) is a German logician and philosopher . His main research interests are in philosophical logic , philosophy of mathematics , philosophy of language , and epistemology , with a focus on formal theories of truth . He is Professor of Philosophy at the University of Oxford , Tutorial Fellow of New College, Oxford . [ 1 ]
Volker Halbach's philosophical studies began at Ludwig-Maximilians-Universität München . He graduated in 1991 with an M.A. ( Master of Arts ) and in 1994 with a doctorate in philosophy ( D.Phil. , summa cum laude) with a dissertation titled "Tarski-Hierarchien". [ 2 ] In 2001 he earned his habilitation with a thesis on "Semantics and Deflationism".
Halbach was an assistant professor at Universität Konstanz (1997–2004).
In 2004, he took up a role at New College, University of Oxford, where he teaches logic-related courses including Introduction to Logic and Elements of Deductive Logic in the first year, Philosophical Logic, Formal Logic, Philosophy of Logic & Language, and Philosophy of Mathematics.
He served as Vice-President of the British Logic Colloquium until 2022.
Halbach is author of several articles and books including The Logic Manual , a textbook on undergraduate logic, and Axiomatic Theories of Truth . [ 3 ]
|
https://en.wikipedia.org/wiki/Volker_Halbach
|
Voltage , also known as ( electrical ) potential difference , electric pressure , or electric tension , is the difference in electric potential between two points. [ 1 ] [ 2 ] In a static electric field , it corresponds to the work needed per unit of charge to move a positive test charge from the first point to the second point. In the International System of Units (SI), the derived unit for voltage is the volt ( V ). [ 3 ] [ 4 ] [ 5 ]
The voltage between points can be caused by the build-up of electric charge (e.g., a capacitor ), and from an electromotive force (e.g., electromagnetic induction in a generator ). [ 6 ] [ 7 ] On a macroscopic scale, a potential difference can be caused by electrochemical processes (e.g., cells and batteries), the pressure-induced piezoelectric effect , and the thermoelectric effect . Since it is the difference in electric potential, it is a physical scalar quantity . [ 8 ]
A voltmeter can be used to measure the voltage between two points in a system. [ 9 ] Often a common reference potential such as the ground of the system is used as one of the points. In this case, the voltage at a point is often stated without explicitly naming the other measurement point. A voltage can be associated with either a source of energy or the loss, dissipation, or storage of energy.
The SI unit of work per unit charge is the joule per coulomb , where 1 volt = 1 joule (of work) per 1 coulomb of charge. [ citation needed ] The old SI definition for volt used power and current ; starting in 1990, the quantum Hall and Josephson effects were used, [ 10 ] and in 2019 physical constants were given defined values for the definition of all SI units.
Voltage is denoted symbolically by Δ V {\displaystyle \Delta V} , simplified V , [ 11 ] especially in English -speaking countries. Internationally, the symbol U is standardized. [ 12 ] It is used, for instance, in the context of Ohm's or Kirchhoff's circuit laws .
The electrochemical potential is the voltage that can be directly measured with a voltmeter. [ 13 ] [ 14 ] The Galvani potential that exists in structures with junctions of dissimilar materials, is also work per charge but cannot be measured with a voltmeter in the external circuit (see § Galvani potential vs. electrochemical potential ).
Voltage is defined so that negatively charged objects are pulled towards higher voltages, while positively charged objects are pulled towards lower voltages. [ 15 ] [ 16 ] Therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage.
Historically, voltage has been referred to using terms like "tension" and "pressure". Even today, the term "tension" is still used, for example within the phrase " high tension " (HT) which is commonly used in the contexts of automotive electronics and systems using thermionic valves ( vacuum tubes ).
In electrostatics , the voltage increase from point r A {\displaystyle \mathbf {r} _{A}} to some point r B {\displaystyle \mathbf {r} _{B}} is given by the change in electrostatic potential V {\textstyle V} from r A {\displaystyle \mathbf {r} _{A}} to r B {\displaystyle \mathbf {r} _{B}} . By definition, [ 17 ] : 78 this is:
{\displaystyle \Delta V_{AB}=V(\mathbf {r} _{B})-V(\mathbf {r} _{A})=-\int _{\mathbf {r} _{A}}^{\mathbf {r} _{B}}\mathbf {E} \cdot \mathrm {d} {\boldsymbol {\ell }},}
where E {\displaystyle \mathbf {E} } is the intensity of the electric field.
In this case, the voltage increase from point A to point B is equal to the work done per unit charge, against the electric field, to move the charge from A to B without causing any acceleration. [ 17 ] : 90–91 Mathematically, this is expressed as the line integral of the electric field along that path. In electrostatics, this line integral is independent of the path taken. [ 17 ] : 91
Under this definition, any circuit where there are time-varying magnetic fields, such as AC circuits , will not have a well-defined voltage between nodes in the circuit, since the electric force is not a conservative force in those cases. [ note 1 ] However, at lower frequencies when the electric and magnetic fields are not rapidly changing, this can be neglected (see electrostatic approximation ).
The electric potential can be generalized to electrodynamics, so that differences in electric potential between points are well-defined even in the presence of time-varying fields. However, unlike in electrostatics, the electric field can no longer be expressed only in terms of the electric potential. [ 17 ] : 417 Furthermore, the potential is no longer uniquely determined up to a constant, and can take significantly different forms depending on the choice of gauge . [ note 2 ] [ 17 ] : 419–422
In this general case, some authors [ 18 ] use the word "voltage" to refer to the line integral of the electric field, rather than to differences in electric potential. In this case, the voltage rise along some path P {\displaystyle {\mathcal {P}}} from r A {\displaystyle \mathbf {r} _{A}} to r B {\displaystyle \mathbf {r} _{B}} is given by:
{\displaystyle \Delta V_{AB}=-\int _{\mathcal {P}}\mathbf {E} \cdot \mathrm {d} {\boldsymbol {\ell }}.}
However, in this case the "voltage" between two points depends on the path taken.
In circuit analysis and electrical engineering , lumped element models are used to represent and analyze circuits. These elements are idealized and self-contained circuit elements used to model physical components. [ 19 ]
When using a lumped element model, it is assumed that the effects of changing magnetic fields produced by the circuit are suitably contained to each element. [ 19 ] Under these assumptions, the electric field in the region exterior to each component is conservative, and voltages between nodes in the circuit are well-defined, where [ 19 ]
{\displaystyle V_{B}-V_{A}=-\int _{\mathcal {P}}\mathbf {E} \cdot \mathrm {d} {\boldsymbol {\ell }},}
as long as the path of integration does not pass through the inside of any component. The above is the same formula used in electrostatics. This integral, with the path of integration being along the test leads, is what a voltmeter will actually measure. [ 20 ] [ note 3 ]
If uncontained magnetic fields throughout the circuit are not negligible, then their effects can be modelled by adding mutual inductance elements. In the case of a physical inductor though, the ideal lumped representation is often accurate. This is because the external fields of inductors are generally negligible, especially if the inductor has a closed magnetic path . If external fields are negligible, we find that
{\displaystyle -\int _{\mathcal {P}}\mathbf {E} \cdot \mathrm {d} {\boldsymbol {\ell }}}
is path-independent, and there is a well-defined voltage across the inductor's terminals. [ 21 ] This is the reason that measurements with a voltmeter across an inductor are often reasonably independent of the placement of the test leads.
The volt (symbol: V ) is the derived unit for electric potential , voltage, and electromotive force . [ 22 ] [ 23 ] The volt is named in honour of the Italian physicist Alessandro Volta (1745–1827), who invented the voltaic pile , possibly the first chemical battery .
A simple analogy for an electric circuit is water flowing in a closed circuit of pipework , driven by a mechanical pump . [ citation needed ] This can be called a "water circuit". The potential difference between two points corresponds to the pressure difference between two points. If the pump creates a pressure difference between two points, then water flowing from one point to the other will be able to do work, such as driving a turbine . Similarly, work can be done by an electric current driven by the potential difference provided by a battery . For example, the voltage provided by a sufficiently-charged automobile battery can "push" a large current through the windings of an automobile's starter motor . If the pump is not working, it produces no pressure difference, and the turbine will not rotate. Likewise, if the automobile's battery is very weak or "dead" (or "flat"), then it will not turn the starter motor.
The hydraulic analogy is a useful way of understanding many electrical concepts. In such a system, the work done to move water is equal to the " pressure drop" (compare p.d.) multiplied by the volume of water moved. Similarly, in an electrical circuit, the work done to move electrons or other charge carriers is equal to "electrical pressure difference" multiplied by the quantity of electrical charges moved. In relation to "flow", the larger the "pressure difference" between two points (potential difference or water pressure difference), the greater the flow between them (electric current or water flow). (See " electric power ".)
Specifying a voltage measurement requires explicit or implicit specification of the points across which the voltage is measured. When using a voltmeter to measure voltage, one electrical lead of the voltmeter must be connected to the first point, one to the second point.
A common use of the term "voltage" is in describing the voltage dropped across an electrical device (such as a resistor). The voltage drop across the device can be understood as the difference between measurements at each terminal of the device with respect to a common reference point (or ground ). The voltage drop is the difference between the two readings. Two points in an electric circuit that are connected by an ideal conductor without resistance and not within a changing magnetic field have a voltage of zero. Any two points with the same potential may be connected by a conductor and no current will flow between them.
The voltage between A and C is the sum of the voltage between A and B and the voltage between B and C . The various voltages in a circuit can be computed using Kirchhoff's circuit laws .
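As a small worked example of this additive property (with made-up component values), consider a 9 V source driving two resistors in series; by Kirchhoff's voltage law the two drops must sum to the source voltage:

```python
V_battery = 9.0          # volts
R1, R2 = 1000.0, 2000.0  # ohms, illustrative values

I = V_battery / (R1 + R2)        # Kirchhoff: V_battery = I*R1 + I*R2
V_AB = I * R1                    # drop across R1 (between nodes A and B)
V_BC = I * R2                    # drop across R2 (between nodes B and C)

print(V_AB, V_BC, V_AB + V_BC)   # 3.0 V, 6.0 V, 9.0 V = voltage from A to C
```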
When talking about alternating current (AC) there is a difference between instantaneous voltage and average voltage. Instantaneous voltages can be added for direct current (DC) and AC, but average voltages can be meaningfully added only when they apply to signals that all have the same frequency and phase.
Instruments for measuring voltages include the voltmeter , the potentiometer , and the oscilloscope . Analog voltmeters , such as moving-coil instruments, work by measuring the current through a fixed resistor, which, according to Ohm's law , is proportional to the voltage across the resistor. The potentiometer works by balancing the unknown voltage against a known voltage in a bridge circuit . The cathode-ray oscilloscope works by amplifying the voltage and using it to deflect an electron beam from a straight path, so that the deflection of the beam is proportional to the voltage.
A common voltage for flashlight batteries is 1.5 volts (DC).
A common voltage for automobile batteries is 12 volts (DC).
Common voltages supplied by power companies to consumers are 110 to 120 volts (AC) and 220 to 240 volts (AC). The voltage in electric power transmission lines used to distribute electricity from power stations can be several hundred times greater than consumer voltages, typically 110 to 1200 kV (AC).
The voltage used in overhead lines to power railway locomotives is between 12 kV and 50 kV (AC) or between 0.75 kV and 3 kV (DC).
Inside a conductive material, the energy of an electron is affected not only by the average electric potential but also by the specific thermal and atomic environment that it is in.
When a voltmeter is connected between two different types of metal, it measures not the electrostatic potential difference, but instead something else that is affected by thermodynamics. [ 24 ] The quantity measured by a voltmeter is the negative of the difference of the electrochemical potential of electrons ( Fermi level ) divided by the electron charge and commonly referred to as the voltage difference, while the pure unadjusted electrostatic potential (not measurable with a voltmeter) is sometimes called Galvani potential .
The terms "voltage" and "electric potential" are ambiguous in that, in practice, they can refer to either of these in different contexts.
The term electromotive force was first used by Volta in a letter to Giovanni Aldini in 1798, and first appeared in a published paper in 1801 in Annales de chimie et de physique . [ 25 ] : 408 Volta meant by this a force that was not an electrostatic force, specifically, an electrochemical force. [ 25 ] : 405 The term was taken up by Michael Faraday in connection with electromagnetic induction in the 1820s. However, a clear definition of voltage and method of measuring it had not been developed at this time. [ 26 ] : 554 Volta distinguished electromotive force (emf) from tension (potential difference): the observed potential difference at the terminals of an electrochemical cell when it was open circuit must exactly balance the emf of the cell so that no current flowed. [ 25 ] : 405
|
https://en.wikipedia.org/wiki/Voltage
|
A voltage-controlled oscillator ( VCO ) is an electronic oscillator whose oscillation frequency is controlled by a voltage input. The applied input voltage determines the instantaneous oscillation frequency. Consequently, a VCO can be used for frequency modulation (FM) or phase modulation (PM) by applying a modulating signal to the control input. A VCO is also an integral part of a phase-locked loop . VCOs are used in synthesizers to generate a waveform whose pitch can be adjusted by a voltage determined by a musical keyboard or other input.
A voltage-to-frequency converter ( VFC ) is a special type of VCO designed to be very linear in frequency control over a wide range of input control voltages. [ 1 ] [ 2 ] [ 3 ]
VCOs can be generally categorized into two groups based on the type of waveform produced. [ 4 ]
A voltage-controlled capacitor is one method of making an LC oscillator vary its frequency in response to a control voltage. Any reverse-biased semiconductor diode displays a measure of voltage-dependent capacitance and can be used to change the frequency of an oscillator by varying a control voltage applied to the diode. Special-purpose variable-capacitance varactor diodes are available with well-characterized wide-ranging values of capacitance. A varactor is used to change the capacitance (and hence the frequency) of an LC tank. A varactor can also change loading on a crystal resonator and pull its resonant frequency.
The same effect occurs with bipolar transistors , as described by Donald E. Thomas [ 5 ] at Bell Labs in 1954: with a tank circuit connected to the collector and the modulating audio signal applied between the emitter and the base, a single-transistor FM transmitter is created. [ 6 ] Thomas worked with a point-contact transistor , but the effect also works in junction transistors ; applications include wireless microphones such as that patented by Raymond A. Litke in 1964. [ 7 ]
For low-frequency VCOs, other methods of varying the frequency (such as altering the charging rate of a capacitor by means of a voltage-controlled current source ) are used (see function generator ).
The frequency of a ring oscillator is controlled by varying either the supply voltage, the current available to each inverter stage, or the capacitive loading on each stage.
VCOs are used in analog applications such as frequency modulation and frequency-shift keying . The functional relationship between the control voltage and the output frequency for a VCO (especially those used at radio frequency ) may not be linear, but over small ranges, the relationship is approximately linear, and linear control theory can be used. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear over a wide range of input voltages.
Modeling for VCOs is often not concerned with the amplitude or shape (sinewave, triangle wave, sawtooth) but rather its instantaneous phase. In effect, the focus is not on the time-domain signal A sin( ωt + θ 0 ) but rather the argument of the sine function (the phase). Consequently, modeling is often done in the phase domain.
The instantaneous frequency of a VCO is often modeled as a linear relationship with its instantaneous control voltage. The output phase of the oscillator is the integral of the instantaneous frequency.
For analyzing a control system, the Laplace transforms of the above signals are useful.
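As a minimal numerical illustration of this phase-domain model, the sketch below integrates an instantaneous frequency f(t) = f0 + Kv·vctrl(t) to obtain the output phase. All parameter values (1 kHz centre frequency, 200 Hz/V tuning gain, a 50 Hz sinusoidal control input) are invented for the example:

```python
import numpy as np

f0 = 1000.0       # free-running frequency, Hz (illustrative)
Kv = 200.0        # tuning gain, Hz per volt (illustrative)
fs = 100_000.0    # simulation sample rate, Hz

t = np.arange(0, 0.02, 1.0 / fs)
v_ctrl = 0.5 * np.sin(2 * np.pi * 50 * t)    # control voltage (the FM message)

f_inst = f0 + Kv * v_ctrl                    # instantaneous frequency
phase = 2 * np.pi * np.cumsum(f_inst) / fs   # phase = integral of frequency
output = np.sin(phase)                       # the FM-modulated VCO output
```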
Tuning range, tuning gain and phase noise are the important characteristics of a VCO. Generally, low phase noise is preferred in a VCO. Tuning gain and noise present in the control signal affect the phase noise; high noise or high tuning gain imply more phase noise. Other important elements that determine the phase noise are sources of flicker noise (1/ f noise) in the circuit, [ 8 ] the output power level, and the loaded Q factor of the resonator. [ 9 ] (see Leeson's equation ). The low frequency flicker noise affects the phase noise because the flicker noise is heterodyned to the oscillator output frequency due to the non-linear transfer function of active devices. The effect of flicker noise can be reduced with negative feedback that linearizes the transfer function (for example, emitter degeneration ).
VCOs generally have lower Q factor compared to similar fixed-frequency oscillators, and so suffer more jitter . The jitter can be made low enough for many applications (such as driving an ASIC), in which case VCOs enjoy the advantages of having no off-chip components (expensive) or on-chip inductors (low yields on generic CMOS processes).
Commonly used VCO circuits are the Clapp and Colpitts oscillators. The more widely used of the two is the Colpitts oscillator; the two circuits are very similar in configuration.
A voltage-controlled crystal oscillator ( VCXO ) is used for fine adjustment of the operating frequency. The frequency of a voltage-controlled crystal oscillator can be varied a few tens of parts per million (ppm) over a control voltage range of typically 0 to 3 volts, because the high Q factor of the crystals allows frequency control over only a small range of frequencies.
A temperature-compensated VCXO ( TCVCXO ) incorporates components that partially correct the dependence on temperature of the resonant frequency of the crystal. A smaller range of voltage control then suffices to stabilize the oscillator frequency in applications where temperature varies, such as heat buildup inside a transmitter .
Placing the oscillator in a crystal oven at a constant but higher-than-ambient temperature is another way to stabilize oscillator frequency. High stability crystal oscillator references often place the crystal in an oven and use a voltage input for fine control. [ 10 ] The temperature is selected to be the turnover temperature : the temperature where small changes do not affect the resonance. The control voltage can be used to occasionally adjust the reference frequency to a NIST source. Sophisticated designs may also adjust the control voltage over time to compensate for crystal aging. [ citation needed ]
A clock generator is an oscillator that provides a timing signal to synchronize operations in digital circuits. VCXO clock generators are used in many areas such as digital TV, modems, transmitters and computers. Design parameters for a VCXO clock generator are tuning voltage range, center frequency, frequency tuning range and the timing jitter of the output signal. Jitter is a form of phase noise that must be minimised in applications such as radio receivers, transmitters and measuring equipment.
When a wider selection of clock frequencies is needed the VCXO output can be passed through digital divider circuits to obtain lower frequencies or be fed to a phase-locked loop (PLL). ICs containing both a VCXO (for external crystal) and a PLL are available. A typical application is to provide clock frequencies in a range from 12 kHz to 96 kHz to an audio digital-to-analog converter .
A frequency synthesizer generates precise and adjustable frequencies based on a stable single-frequency clock. A digitally controlled oscillator based on a frequency synthesizer may serve as a digital alternative to analog voltage controlled oscillator circuits.
VCOs are used in function generators , phase-locked loops including frequency synthesizers used in communication equipment and the production of electronic music , to generate variable tones in synthesizers .
Function generators are low-frequency oscillators which feature multiple waveforms, typically sine, square, and triangle waves. Monolithic function generators are voltage-controlled.
Analog phase-locked loops typically contain VCOs. High-frequency VCOs are usually used in phase-locked loops for radio receivers. Phase noise is the most important specification in this application. [ citation needed ]
Audio-frequency VCOs are used in analog music synthesizers. For these, sweep range, linearity, and distortion are often the most important specifications. Audio-frequency VCOs for use in musical contexts were largely superseded in the 1980s by their digital counterparts, digitally controlled oscillators (DCOs), due to their output stability in the face of temperature changes during operation. Since the 1990s, musical software has become the dominant sound-generating method.
Voltage-to-frequency converters are voltage-controlled oscillators with a highly linear relation between applied voltage and frequency. They are used to convert a slow analog signal (such as from a temperature transducer) to a signal suitable for transmission over a long distance, since the frequency will not drift or be affected by noise. Oscillators in this application may have sine or square wave outputs.
Where the oscillator drives equipment that may generate radio-frequency interference, adding a varying voltage to its control input, called dithering , [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ excessive citations ] can disperse the interference spectrum to make it less objectionable (see spread spectrum clock ).
|
https://en.wikipedia.org/wiki/Voltage-controlled_oscillator
|
Voltage-gated proton channels are ion channels that have the unique property of opening with depolarization , but in a strongly pH -sensitive manner. [ 1 ] The result is that these channels open only when the electrochemical gradient is outward, such that their opening will only allow protons to leave cells . Their function thus appears to be acid extrusion from cells. [ 2 ]
Another important function occurs in phagocytes (e.g. eosinophils , neutrophils , and macrophages ) during the respiratory burst . When bacteria or other microbes are engulfed by phagocytes, the enzyme NADPH oxidase assembles in the membrane and begins to produce reactive oxygen species (ROS) that help kill bacteria. [ 3 ] NADPH oxidase is electrogenic, [ 4 ] moving electrons across the membrane, and proton channels open to allow proton flux to balance the electron movement electrically. [ 5 ] The functional expression of Hv1 in phagocytes has been well characterized in mammals, and recently in zebrafish, [ 6 ] suggesting its important roles in the immune cells of mammals and non-mammalian vertebrates.
A group of small molecule inhibitors of the Hv1 channel are shown as chemotherapeutics and anti-inflammatory agents. [ 7 ]
When activated, the voltage-gated proton channel Hv1 can allow up to 100,000 hydrogen ions across the membrane each second. [ 8 ] Whereas most voltage-gated ion channels contain a central pore that is surrounded by alpha helices and a voltage-sensing domain (VSD), voltage-gated hydrogen channels contain no central pore, [ 9 ] so their voltage-sensing domains carry out the job of bringing protons across the membrane. Because the relative H+ concentrations on each side of the membrane result in a pH gradient, these voltage-gated hydrogen channels only carry outward current, meaning they are used to move protons out of the cell. As a result, the opening of voltage-gated hydrogen channels usually hyperpolarizes the cell membrane, that is, makes the membrane potential more negative. [ 10 ]
A recent discovery has shown that the voltage-gated proton channel Hv1 is highly expressed in human breast tumor tissues that are metastatic, but not in non-metastatic breast cancer tissues. [ 11 ] Because it has also been found to be highly expressed in other cancer tissues, [ 12 ] the study of the voltage-gated proton channel has led many scientists to wonder what its importance is in cancer metastasis. However, much is still being discovered concerning the structure and function of the voltage-gated proton channel.
|
https://en.wikipedia.org/wiki/Voltage-gated_proton_channel
|
Voltage-sensitive dyes , also known as potentiometric dyes , are dyes which change their spectral properties in response to voltage changes. [ 1 ] They are able to provide linear measurements of firing activity of single neurons , large neuronal populations or activity of myocytes . Many physiological processes are accompanied by changes in cell membrane potential which can be detected with voltage sensitive dyes. Measurements may indicate the site of action potential origin, and measurements of action potential velocity and direction may be obtained. [ 2 ]
Potentiometric dyes are used to monitor the electrical activity inside cell organelles where it is not possible to insert an electrode , such as the mitochondria and dendritic spine . This technology is especially powerful for the study of patterns of activity in complex multicellular preparations. It also makes possible the measurement of spatial and temporal variations in membrane potential along the surface of single cells.
Fast-response probes: These are amphiphilic membrane staining dyes which usually have a pair of hydrocarbon chains acting as membrane anchors and a hydrophilic group which aligns the chromophore perpendicular to the membrane/aqueous interface. The chromophore is believed to undergo a large electronic charge shift as a result of excitation from the ground to the excited state and this underlies the putative electrochromic mechanism for the sensitivity of these dyes to membrane potential. This molecule (dye) intercalates among the lipophilic part of biological membranes . This orientation assures that the excitation induced charge redistribution will occur parallel to the electric field within the membrane. A change in the voltage across the membrane will therefore cause a spectral shift resulting from a direct interaction between the field and the ground and excited state dipole moments .
New voltage dyes can sense voltage with high speed and sensitivity using photoinduced electron transfer (PeT) through a conjugated molecular wire. [ 3 ] [ 4 ]
Slow-response probes: These exhibit potential-dependent changes in their transmembrane distribution which are accompanied by a fluorescence change. Typical slow-response probes include cationic carbocyanines and rhodamines , and ionic oxonols .
Commonly used voltage sensitive dyes are substituted aminonaphthylethenylpyridinium (ANEP) dyes, such as di-4-ANEPPS, di-8-ANEPPS, and RH237. Depending on their chemical modifications which change their physical properties they are used for different experimental procedures. [ 5 ] They were first described in 1985 by the research group of Leslie Loew. [ 6 ] ANNINE-6plus is a voltage sensitive dye with fast response (ns response time ) and high sensitivity. It has been applied to measure the action potentials of a single t-tubule of cardiomyocytes by Guixue Bu et al. [ 7 ] More recently, a series of fluorinated ANEP dyes was introduced that offer enhanced sensitivity and photostability; they are also available over a wide choice of excitation and emission wavelengths. [ 8 ] A recent computational study confirmed that the ANEP dyes are affected only by the electrostatic environment and not by specific molecular interactions. [ 9 ] Other structural scaffolds, such as xanthenes, [ 10 ] are also successfully used.
The core material for imaging brain activity with voltage-sensitive dyes is the dyes themselves. These voltage-sensitive dyes are lipophilic and localize preferentially in membranes, anchored by their hydrophobic tails. They are used in applications involving fluorescence or absorption; they are fast acting and are able to provide linear measurements of changes in membrane potential. [ 11 ] Voltage sensitive dyes are supplied by many companies who offer fluorescent probes for biological applications. Potentiometric Probes, LLC specializes only in voltage sensitive dyes; they have an exclusive license to distribute the large set of fluorinated VSDs, marketed under the ElectroFluor brand.
A variety of specialized equipment may be used in conjunction with the dyes, and choices in equipment will vary according to the particularities of a preparation. Essentially, equipment will include specialized microscopes and imaging devices, and may include technical lamps or lasers. [ 11 ]
Strengths of imaging brain activity with voltage-sensitive dyes include the following abilities:
Weaknesses of imaging brain activity with voltage-sensitive dyes include the following problems:
Voltage-sensitive dyes have been used to measure neural activity in several areas of the nervous system in a variety of organisms, including the squid giant axon , [ 19 ] whisker barrels of the rat somatosensory cortex, [ 20 ] [ 21 ] olfactory bulb of the salamander, [ 22 ] [ 23 ] [ 24 ] visual cortex of the cat, [ 25 ] optic tectum of the frog, [ 26 ] and the visual cortex of the rhesus monkey . [ 27 ] [ 28 ]
Many applications in cardiac electrophysiology have been published, including ex vivo mapping of electrical activity in whole hearts from various animal species, [ 29 ] [ 30 ] subcellular imaging from single cardiomyocytes, [ 31 ] and even mapping of both sinus rhythms and arrhythmias in the open heart of the in vivo pig, [ 18 ] where motion artifacts could be eliminated by dual wavelength ratio imaging of the voltage sensitive dye fluorescence.
|
https://en.wikipedia.org/wiki/Voltage-sensitive_dye
|
The voltage clamp is an experimental method used by electrophysiologists to measure the ion currents through the membranes of excitable cells, such as neurons , while holding the membrane voltage at a set level. [ 1 ] A basic voltage clamp will iteratively measure the membrane potential , and then change the membrane potential (voltage) to a desired value by adding the necessary current. This "clamps" the cell membrane at a desired constant voltage, allowing the voltage clamp to record what currents are delivered. Because the currents applied to the cell must be equal to (and opposite in charge to) the current going across the cell membrane at the set voltage, the recorded currents indicate how the cell reacts to changes in membrane potential. [ 2 ] Cell membranes of excitable cells contain many different kinds of ion channels , some of which are voltage-gated . The voltage clamp allows the membrane voltage to be manipulated independently of the ionic currents, allowing the current–voltage relationships of membrane channels to be studied. [ 3 ]
The concept of the voltage clamp is attributed to Kenneth Cole [ 4 ] and George Marmont [ 5 ] in the spring of 1947. [ 6 ] They inserted an internal electrode into the giant axon of a squid and began to apply a current. Cole discovered that it was possible to use two electrodes and a feedback circuit to keep the cell's membrane potential at a level set by the experimenter.
Cole developed the voltage clamp technique before the era of microelectrodes , so his two electrodes consisted of fine wires twisted around an insulating rod. Because this type of electrode could be inserted into only the largest cells, early electrophysiological experiments were conducted almost exclusively on squid axons.
Squids squirt jets of water when they need to move quickly, as when escaping a predator. To make this escape as fast as possible, they have an axon that can reach 1 mm in diameter (signals propagate more quickly down large axons). The squid giant axon was the first preparation that could be used to voltage clamp a transmembrane current, and it was the basis of Hodgkin and Huxley's pioneering experiments on the properties of the action potential. [ 6 ]
Alan Hodgkin realized that, to understand ion flux across the membrane, it was necessary to eliminate differences in membrane potential. [ 7 ] Using experiments with the voltage clamp, Hodgkin and Andrew Huxley published 5 papers in the summer of 1952 describing how ionic currents give rise to the action potential . [ 8 ] The final paper proposed the Hodgkin–Huxley model which mathematically describes the action potential. The use of voltage clamps in their experiments to study and model the action potential in detail has laid the foundation for electrophysiology ; for which they shared the 1963 Nobel Prize in Physiology or Medicine . [ 7 ]
The voltage clamp is a current generator. Transmembrane voltage is recorded through a "voltage electrode", relative to ground , and a "current electrode" passes current into the cell. The experimenter sets a "holding voltage", or "command potential", and the voltage clamp uses negative feedback to maintain the cell at this voltage. The electrodes are connected to an amplifier, which measures membrane potential and feeds the signal into a feedback amplifier . This amplifier also gets an input from the signal generator that determines the command potential, and it subtracts the membrane potential from the command potential (V command – V m ), magnifies any difference, and sends an output to the current electrode. Whenever the cell deviates from the holding voltage, the operational amplifier generates an "error signal", that is the difference between the command potential and the actual voltage of the cell. The feedback circuit passes current into the cell to reduce the error signal to zero. Thus, the clamp circuit produces a current equal and opposite to the ionic current.
The two-electrode voltage clamp (TEVC) technique is used to study properties of membrane proteins, especially ion channels. [ 9 ] Researchers use this method most commonly to investigate membrane structures expressed in Xenopus oocytes . The large size of these oocytes allows for easy handling and manipulability. [ 10 ]
The TEVC method utilizes two low-resistance pipettes, one sensing voltage and the other injecting current. The microelectrodes are filled with conductive solution and inserted into the cell to artificially control membrane potential. The membrane acts as a dielectric as well as a resistor , while the fluids on either side of the membrane function as capacitors . [ 10 ] The microelectrodes compare the membrane potential against a command voltage, giving an accurate reproduction of the currents flowing across the membrane. Current readings can be used to analyze the electrical response of the cell to different applications.
This technique is favored over single-microelectrode clamp or other voltage clamp techniques when conditions call for resolving large currents. The high current-passing capacity of the two-electrode clamp makes it possible to clamp large currents that are impossible to control with single-electrode patch techniques . [ 11 ] The two-electrode system is also desirable for its fast clamp settling time and low noise. However, TEVC is limited in use with regard to cell size. It is effective in larger-diameter oocytes, but more difficult to use with small cells. Additionally, the TEVC method is limited in that the transmitter of current must be contained in the pipette. It is not possible to manipulate the intracellular fluid while clamping, which is possible using patch clamp techniques. [ 2 ] Another disadvantage involves "space clamp" issues. Cole's voltage clamp used a long wire that clamped the squid axon uniformly along its entire length. TEVC microelectrodes can provide only a spatial point source of current that may not uniformly affect all parts of an irregularly shaped cell.
The dual-cell voltage clamp technique is a specialized variation of the two electrode voltage clamp, and is only used in the study of gap junction channels. [ 12 ] Gap junctions are pores that directly link two cells through which ions and small molecules flow freely. When gap junction proteins, typically connexins or innexins , are expressed in two adjacent cells, either endogenously or via injection of mRNA , a junction channel will form between the cells. Since two cells are present in the system, two sets of electrodes are used. A recording electrode and a current injecting electrode are inserted into each cell, and each cell is clamped individually (each set of electrodes is attached to a separate apparatus, and integration of data is performed by computer). To record junctional conductance , the current is varied in the first cell while the recording electrode in the second cell records any changes in V m for the second cell only. (The process can be reversed with the stimulus occurring in the second cell and recording occurring in the first cell.) Since no variation in current is being induced by the electrode in the recorded cell, any change in voltage must be induced by current crossing into the recorded cell, through the gap junction channels, from the cell in which the current was varied. [ 12 ]
This category describes a set of techniques in which one electrode is used for voltage clamp. Continuous single-electrode clamp (SEVC-c) technique is often used with patch-clamp recording. Discontinuous single-electrode voltage-clamp (SEVC-d) technique is used with penetrating intracellular recording. This single electrode carries out the functions of both current injection and voltage recording.
The "patch-clamp" technique allows the study of individual ion channels. It uses an electrode with a relatively large tip (> 1 micrometer) that has a smooth surface (rather than a sharp tip). This is a "patch-clamp electrode" (as distinct from a "sharp electrode" used to impale cells). This electrode is pressed against a cell membrane and suction is applied to pull the cell's membrane inside the electrode tip. The suction causes the cell to form a tight seal with the electrode (a "gigaohm seal", as the resistance is more than a gigaohm ).
SEVC-c has the advantage that you can record from small cells that would be impossible to impale with two electrodes. However:
The discontinuous single-electrode voltage clamp (SEVC-d) has some advantages over SEVC-c for whole-cell recording. In this, a different approach is taken for passing current and recording voltage. A SEVC-d amplifier operates on a " time-sharing " basis, so the electrode regularly and frequently switches between passing current and measuring voltage. In effect, there are two electrodes, but each is in operation for only half of the time it is on. The oscillation between the two functions of the single electrode is termed a duty cycle. During each cycle, the amplifier measures the membrane potential and compares it with the holding potential. An operational amplifier measures the difference, and generates an error signal. This current is a mirror image of the current generated by the cell. The amplifier outputs feature sample and hold circuits, so each briefly sampled voltage is then held on the output until the next measurement in the next cycle. To be specific, the amplifier measures voltage in the first few microseconds of the cycle, generates the error signal, and spends the rest of the cycle passing current to reduce that error. At the start of the next cycle, voltage is measured again, a new error signal generated, current passed etc. The experimenter sets the cycle length, and it is possible to sample with periods as low as about 15 microseconds, corresponding to a 67 kHz switching frequency. Switching frequencies lower than about 10 kHz are not sufficient when working with action potentials that are less than 1 millisecond wide. Note that not all discontinuous voltage-clamp amplifiers support switching frequencies higher than 10 kHz. [ 10 ]
For this to work, the cell capacitance must be higher than the electrode capacitance by at least an order of magnitude . Capacitance slows the kinetics (the rise and fall times) of currents. If the electrode capacitance is much less than that of the cell, then when current is passed through the electrode, the electrode voltage will change faster than the cell voltage. Thus, when current is injected and then turned off (at the end of a duty cycle), the electrode voltage will decay faster than the cell voltage. As soon as the electrode voltage asymptotes to the cell voltage, the voltage can be sampled (again) and the next amount of charge applied. Thus, the frequency of the duty cycle is limited to the speed at which the electrode voltage rises and decays while passing current. The lower the electrode capacitance the faster one can cycle.
SEVC-d has a major advantage over SEVC-c in allowing the experimenter to measure membrane potential, and, as it obviates passing current and measuring voltage at the same time, there is never a series resistance error. The main disadvantages are that the time resolution is limited and the amplifier is unstable. If it passes too much current, so that the goal voltage is over-shot, it reverses the polarity of the current in the next duty cycle. This causes it to undershoot the target voltage, so the next cycle reverses the polarity of the injected current again. This error can grow with each cycle until the amplifier oscillates out of control (“ringing”); this usually results in the destruction of the cell being recorded. The investigator wants a short duty cycle to improve temporal resolution; the amplifier has adjustable compensators that will make the electrode voltage decay faster, but, if these are set too high the amplifier will ring, so the investigator is always trying to “tune” the amplifier as close to the edge of uncontrolled oscillation as possible, in which case small changes in recording conditions can cause ringing. There are two solutions: to “back off” the amplifier settings into a safe range, or to be alert for signs that the amplifier is about to ring.
From the point of view of control theory , the voltage clamp experiment can be described in terms of the application of a high-gain output feedback control law [ 13 ] to the neuronal membrane. [ 14 ] Mathematically, the membrane voltage can be modeled by a conductance-based model with an input given by the applied current I a p p ( t ) {\displaystyle I_{app}(t)} and an output given by the membrane voltage V ( t ) {\displaystyle V(t)} . Hodgkin and Huxley's original conductance-based model, which represents a neuronal membrane containing sodium and potassium ion currents , as well as a leak current , is given by the system of ordinary differential equations
{\displaystyle C_{m}{\frac {dV}{dt}}=-{\bar {g}}_{\text{K}}n^{4}(V-V_{\text{K}})-{\bar {g}}_{\text{Na}}m^{3}h(V-V_{\text{Na}})-{\bar {g}}_{\text{L}}(V-V_{\text{L}})+I_{app}}
{\displaystyle {\frac {dp}{dt}}=\alpha _{p}(V)(1-p)-\beta _{p}(V)p,\quad p=m,n,h}
where C m {\displaystyle C_{m}} is the membrane capacitance, g ¯ Na {\displaystyle {\bar {g}}_{\text{Na}}} , g ¯ K {\displaystyle {\bar {g}}_{\text{K}}} and g ¯ L {\displaystyle {\bar {g}}_{\text{L}}} are maximal conductances, V Na {\displaystyle V_{\text{Na}}} , V K {\displaystyle V_{\text{K}}} and V L {\displaystyle V_{\text{L}}} are reversal potentials, α p {\displaystyle \alpha _{p}} and β p {\displaystyle \beta _{p}} are ion channel voltage-dependent rate constants, and the state variables m {\displaystyle m} , h {\displaystyle h} , and n {\displaystyle n} are ion channel gating variables .
It is possible to rigorously show that the feedback law
{\displaystyle I_{app}(t)=k(V_{\text{ref}}-V(t))}
drives the membrane voltage V ( t ) {\displaystyle V(t)} arbitrarily close to the reference voltage V ref {\displaystyle V_{\text{ref}}} as the gain k > 0 {\displaystyle k>0} is increased to an arbitrarily large value. [ 14 ] This fact, which is by no means a general property of dynamical systems (a high-gain can, in general, lead to instability [ 15 ] ), is a consequence of the structure and the properties of the conductance-based model above. In particular, the dynamics of each gating variable p = m , h , n {\displaystyle p=m,h,n} , which are driven by V {\displaystyle V} , verify the strong stability property of exponential contraction. [ 14 ] [ 16 ]
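To illustrate this, the following is a minimal Euler-integration sketch of the conductance-based model above under the feedback law I_app = k(V_ref − V). The channel parameters are the standard squid-axon values of the Hodgkin–Huxley model; the gains, duration and step size are arbitrary choices for the example:

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters (voltages in mV, time in ms).
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
V_Na, V_K, V_L = 50.0, -77.0, -54.4

def rates(V):
    # The voltage-dependent rate constants alpha_p and beta_p for p = m, h, n.
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    return a_m, b_m, a_h, b_h, a_n, b_n

def clamp(V_ref, k, T=10.0, dt=0.001):
    """Integrate the membrane under the feedback law I_app = k*(V_ref - V)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
    for _ in range(int(T / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        I_ion = (g_Na * m**3 * h * (V - V_Na)
                 + g_K * n**4 * (V - V_K)
                 + g_L * (V - V_L))
        I_app = k * (V_ref - V)                  # high-gain output feedback
        V += dt * (I_app - I_ion) / C_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
    return V

print(clamp(V_ref=-20.0, k=5.0))    # low gain: V settles away from the target
print(clamp(V_ref=-20.0, k=500.0))  # high gain: V is held close to -20 mV
```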
|
https://en.wikipedia.org/wiki/Voltage_clamp
|
A voltameter or coulometer is a scientific instrument used for measuring electric charge (quantity of electricity) through electrolytic action. The SI unit of electric charge is the coulomb .
The voltameter should not be confused with a voltmeter , which measures electric potential. The SI unit for electric potential is the volt .
Michael Faraday used an apparatus that he termed a "volta-electrometer"; subsequently John Frederic Daniell called this a "voltameter". [ 1 ]
The voltameter is an electrolytic cell and the measurement is made by weighing the element deposited or released at the cathode in a specified time.
This is the most accurate type. It consists of two silver plates in a solution of silver nitrate . When current is flowing, silver dissolves at the anode and is deposited at the cathode . The cathode is initially weighed, current is passed for a measured time, and the cathode is weighed again.
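The arithmetic behind the measurement is Faraday's first law of electrolysis, Q = zFm/M. A short sketch with an illustrative deposited mass (silver's electrochemical equivalent is about 1.118 mg per coulomb):

```python
F = 96485.332      # Faraday constant, C/mol
M_Ag = 107.8682    # molar mass of silver, g/mol
z = 1              # electrons transferred per silver ion

m_deposited = 1.118          # grams gained by the cathode (illustrative value)
Q = z * F * m_deposited / M_Ag
print(Q)                     # about 1000 coulombs

t_seconds = 1000.0           # if current was constant over a measured time,
print(Q / t_seconds)         # the mean current follows directly: about 1 A
```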
This is similar to the silver voltameter but the anode and cathode are copper and the solution is copper sulfate , acidified with sulfuric acid . It is cheaper than the silver voltameter, but slightly less accurate.
In this device, mercury is used to determine the quantity of charge transferred during the following reaction:
These oxidation/reduction processes have 100% efficiency over a wide range of current densities. Measurement of the quantity of electricity ( coulombs ) is based on the change in the mass of the mercury electrode: the mass of the electrode increases during cathodic deposition of mercury ions and decreases during anodic dissolution of the metal.
The anode and cathode are platinum and the solution is dilute sulfuric acid. Hydrogen is released at the cathode and collected in a graduated tube so that its volume can be measured. The volume is adjusted to standard temperature and pressure and the mass of hydrogen is calculated from the volume. This kind of voltameter is sometimes called Hofmann voltameter .
A coulometer is a device to determine electric charges . The term comes from the unit of charge, the coulomb . There can be two goals in measuring charge:
Coulometers can be devices that are used to determine an amount of substance by measuring the charge passed. Such devices perform a quantitative analysis . This method is called coulometry , and the related coulometers are either devices used for coulometry or instruments that perform coulometry automatically.
Coulometers can also be used to determine electric quantities in a direct current circuit, namely the total charge or a constant current. These devices, invented by Michael Faraday , were used frequently in the 19th century and in the first half of the 20th century. In the past, coulometers of that type were named voltameters.
|
https://en.wikipedia.org/wiki/Voltameter
|
The Volterra series is a model for non-linear behavior similar to the Taylor series . It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors .
It has been applied in the fields of medicine ( biomedical engineering ) and biology, especially neuroscience . [ 1 ] It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers [ 2 ] and frequency mixers . [ citation needed ] Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model.
In mathematics , a Volterra series denotes a functional expansion of a dynamic, nonlinear , time-invariant functional . The Volterra series are frequently used in system identification . The Volterra series, which is used to prove the Volterra theorem, is an infinite sum of multidimensional convolutional integrals.
The Volterra series is a modernized version of the theory of analytic functionals from the Italian mathematician Vito Volterra , in his work dating from 1887. [ 3 ] [ 4 ] Norbert Wiener became interested in this theory in the 1920s due to his contact with Volterra's student Paul Lévy . Wiener applied his theory of Brownian motion for the integration of Volterra analytic functionals. The use of the Volterra series for system analysis originated from a restricted 1942 wartime report [ 5 ] of Wiener's, who was then a professor of mathematics at MIT . He used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war. [ 6 ] As a general method of analysis of nonlinear systems, the Volterra series came into use after about 1957 as the result of a series of reports, at first privately circulated, from MIT and elsewhere. [ 7 ] The name itself, Volterra series , came into use a few years later.
The theory of the Volterra series can be viewed from two different perspectives:
The latter functional mapping perspective is more frequently used due to the assumed time-invariance of the system.
A continuous time-invariant system with x ( t ) as input and y ( t ) as output can be expanded in the Volterra series as

y(t) = h_0 + \sum_{n=1}^{N} \int_{a}^{b} \cdots \int_{a}^{b} h_n(\tau_1, \dots, \tau_n) \prod_{j=1}^{n} x(t - \tau_j)\, d\tau_1 \cdots d\tau_n
Here the constant term h_0 on the right side is usually taken to be zero by suitable choice of output level y . The function h_n(τ_1, …, τ_n) is called the n -th-order Volterra kernel . It can be regarded as a higher-order impulse response of the system. For the representation to be unique, the kernels must be symmetrical in the n variables τ. If a kernel is not symmetrical, it can be replaced by a symmetrized kernel, which is the average over the n ! permutations of these n variables.
If N is finite, the series is said to be truncated . If a , b , and N are finite, the series is called doubly finite .
Sometimes the n -th-order term is divided by n !, a convention which is convenient when taking the output of one Volterra system as the input of another ("cascading").
The causality condition : Since in any physically realizable system the output can only depend on previous values of the input, the kernels h_n(t_1, t_2, …, t_n) will be zero if any of the variables t_1, t_2, …, t_n are negative. The integrals may then be written over the half range from zero to infinity.

So if the operator is causal, a ≥ 0.
Fréchet's approximation theorem : The use of the Volterra series to represent a time-invariant functional relation is often justified by appealing to a theorem due to Fréchet . This theorem states that a time-invariant functional relation (satisfying certain very general conditions) can be approximated uniformly and to an arbitrary degree of precision by a sufficiently high finite-order Volterra series. Among other conditions, the set of admissible input functions x ( t ) for which the approximation will hold is required to be compact . It is usually taken to be an equicontinuous , uniformly bounded set of functions, which is compact by the Arzelà–Ascoli theorem . In many physical situations, this assumption about the input set is a reasonable one. The theorem, however, gives no indication as to how many terms are needed for a good approximation, which is an essential question in applications.
The discrete-time case is similar to the continuous-time case, except that the integrals are replaced by summations:
y(n) = h_0 + \sum_{p=1}^{P} \sum_{\tau_1=a}^{b} \cdots \sum_{\tau_p=a}^{b} h_p(\tau_1, \dots, \tau_p) \prod_{j=1}^{p} x(n - \tau_j), \qquad P \in \{1, 2, \dots\} \cup \{\infty\}

Each function h_p is called a discrete-time Volterra kernel .
If P is finite, the series operator is said to be truncated . If a , b and P are finite, the series operator is called a doubly finite Volterra series . If a ≥ 0, the operator is said to be causal .
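As an illustration of the definition above, here is a minimal sketch evaluating a truncated, causal, discrete-time Volterra series by direct summation. The array layout and the common memory length M per kernel are choices of the sketch; the cost grows as M^p per sample, so this is for illustration only.

```python
import numpy as np
from itertools import product

def volterra_output(x, kernels, h0=0.0):
    """y(n) = h0 + sum_p sum_{taus} h_p(tau_1..tau_p) * prod_j x(n - tau_j).
    kernels[p-1] is a p-dimensional array over lags 0..M-1 (a = 0, b = M-1)."""
    y = np.full(len(x), h0, dtype=float)
    for p, h in enumerate(kernels, start=1):
        M = h.shape[0]
        for n in range(len(x)):
            for taus in product(range(M), repeat=p):
                if max(taus) <= n:            # causal: need n - tau_j >= 0
                    term = float(h[taus])
                    for t in taus:
                        term *= x[n - t]
                    y[n] += term
    return y

# Example: memory-2 system with a linear and a quadratic kernel (made up).
x = np.array([1.0, 0.5, -0.2, 0.0])
h1 = np.array([1.0, 0.3])          # h_1(tau)
h2 = 0.1 * np.ones((2, 2))         # h_2(tau_1, tau_2)
print(volterra_output(x, [h1, h2]))
```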
We can always consider, without loss of generality, the kernel h_p(τ_1, …, τ_p) to be symmetrical. In fact, by the commutativity of the multiplication, it is always possible to symmetrize a kernel by forming a new one taken as the average of the kernels over all permutations of the variables τ_1, …, τ_p.
For a causal system with symmetrical kernels we can rewrite the n -th term approximately in triangular form
Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series , and then recomputing the coefficients of the original Volterra series. The Volterra series main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated.
An important aspect, with respect to which the following methods differ, is whether the orthogonalization of the basis functionals is to be performed over the idealized specification of the input signal (e.g. gaussian, white noise ) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (as arbitrary inputs can be easily accommodated) and more precise (because the idealized version of the input signal is not always realizable).
This method, developed by Lee and Schetzen, orthogonalizes with respect to the actual mathematical description of the signal, i.e. the projection onto the new basis functionals is based on the knowledge of the moments of the random signal.
We can write the Volterra series in terms of homogeneous operators, as
where
To allow identification by orthogonalization, the Volterra series must be rearranged in terms of orthogonal non-homogeneous G operators ( Wiener series ):
The G operators can be defined by the following:
where H_i x(n) is an arbitrary homogeneous Volterra operator and x ( n ) is stationary white noise (SWN) with zero mean and variance A .
Recalling that every Volterra functional is orthogonal to all Wiener functionals of greater order, and considering the following Volterra functional:
we can write
If x is SWN, with τ_1 ≠ τ_2 ≠ … ≠ τ_P, and letting A = σ_x², we have

So, if we exclude the diagonal elements ( τ_i ≠ τ_j for all i ≠ j ), it is
If we want to consider the diagonal elements, the solution proposed by Lee and Schetzen is
The main drawback of this technique is that the estimation errors made on all elements of lower-order kernels will affect each diagonal element of order p through the summation \sum_{m=0}^{p-1} G_m x(n) , which is itself the quantity used to estimate the diagonal elements.

Efficient formulas that avoid this drawback, and references for diagonal kernel element estimation, exist. [ 8 ] [ 9 ]

Once the Wiener kernels have been identified, the Volterra kernels can be obtained by using the Wiener-to-Volterra formulas, reported below for a fifth-order Volterra series:
In the traditional orthogonal algorithm, using inputs with high σ_x has the advantage of stimulating high-order nonlinearity, so as to achieve more accurate high-order kernel identification.

As a drawback, the use of high σ_x values causes high identification errors in lower-order kernels, [ 10 ] mainly due to nonideality of the input and truncation errors.

On the contrary, the use of a lower σ_x in the identification process can lead to a better estimation of the lower-order kernels, but can be insufficient to stimulate high-order nonlinearity.
This phenomenon, which can be called locality of truncated Volterra series, can be revealed by calculating the output error of a series as a function of different variances of input.
This test can be repeated with series identified with different input variances, obtaining different curves, each with a minimum at the variance used in the identification.
To overcome this limitation, a low σ_x value should be used for the lower-order kernels and gradually increased for higher-order kernels.

This is not a theoretical problem in Wiener kernel identification, since the Wiener functionals are orthogonal to each other, but an appropriate normalization is needed in the Wiener-to-Volterra conversion formulas to take into account the use of different variances.
Furthermore, new Wiener to Volterra conversion formulas are needed.
The traditional Wiener kernel identification should be changed as follows: [ 10 ]
In the above formulas the impulse functions are introduced for the identification of diagonal kernel points.
If the Wiener kernels are extracted with the new formulas, the following Wiener-to-Volterra formulas (written explicitly up to the fifth order) are needed:
As can be seen, the drawback with respect to the previous formula [ 9 ] is that for the identification of the n -th-order kernel, all lower kernels must be identified again with the higher variance.
However, an outstanding improvement in the output MSE will be obtained if the Wiener and Volterra kernels are obtained with the new formulas. [ 10 ]
This method was developed by Wray and Green (1994) and utilizes the fact that a simple 2-fully connected layer neural network (i.e., a multilayer perceptron ) is computationally equivalent to the Volterra series and therefore contains the kernels hidden in its architecture. After such a network has been trained to successfully predict the output based on the current state and memory of the system, the kernels can then be computed from the weights and biases of that network.
The general notation for the n -th-order Volterra kernel is given by

h_n(t_1, \dots, t_n) = \sum_j c_j\, a_{jn}\, \omega_{j t_1} \cdots \omega_{j t_n}

where n is the order, c_j are the weights to the linear output node, a_{jn} the coefficients of the polynomial expansion of the output function of the hidden nodes, and ω_{jt} the weights from the input layer to the non-linear hidden layer. It is important to note that this method allows kernel extraction up until the number of input delays in the architecture of the network. Furthermore, it is vital to carefully construct the size of the network input layer so that it represents the effective memory of the system.
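A minimal sketch of this reconstruction, assuming a single tanh hidden layer whose output function is expanded in a Taylor series around its bias (so a_{jn} = f^{(n)}(b_j)/n!); Wray and Green's exact formulation may differ in detail:

```python
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as P

def tanh_taylor_coeff(bias, n):
    """n-th Taylor coefficient a_n = f^(n)(bias)/n! of f(x) = tanh(x),
    using d/dx Q(tanh x) = Q'(tanh x) * (1 - tanh^2 x)."""
    q = np.array([0.0, 1.0])                  # f^(0) as polynomial in t = tanh
    for _ in range(n):
        q = P.polymul(P.polyder(q), [1.0, 0.0, -1.0])   # Q' * (1 - t^2)
    return P.polyval(np.tanh(bias), q) / factorial(n)

def kernel_from_mlp(c, W, b, n):
    """h_n(t_1..t_n) = sum_j c_j * a_{jn} * w_{j,t_1} * ... * w_{j,t_n}
    for a one-hidden-layer tanh MLP with output weights c, input-to-hidden
    weights W (rows = hidden units, columns = input delays), biases b."""
    kernel = np.zeros((W.shape[1],) * n)
    for c_j, w_j, b_j in zip(c, W, b):
        outer = w_j
        for _ in range(n - 1):
            outer = np.multiply.outer(outer, w_j)   # rank-1 tensor w_j^(x n)
        kernel += c_j * tanh_taylor_coeff(b_j, n) * outer
    return kernel
```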
This method and its more efficient version (fast orthogonal algorithm) were invented by Korenberg. [ 11 ] In this method the orthogonalization is performed empirically over the actual input. It has been shown to perform more precisely than the crosscorrelation method. Another advantage is that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled.
Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread existence of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of Volterra series: linear combination of non-linear basis functionals. For estimation, the order of the original series should be known, since the Volterra basis functionals are not orthogonal, and thus estimation cannot be performed incrementally.
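A minimal sketch of this regression approach: the products of delayed inputs form the design matrix, the (symmetric) kernel coefficients enter linearly, and ordinary least squares solves for them. The memory length M and the use of numpy's lstsq are choices of the sketch, not prescriptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_volterra_lstsq(x, y, M, order):
    """Least-squares fit of symmetric Volterra kernels (memory M)."""
    N = len(x)
    lagged = np.zeros((N, M))
    for tau in range(M):
        lagged[tau:, tau] = x[:N - tau]          # causal x(n - tau)
    columns, index = [np.ones(N)], [()]          # constant term h_0
    for p in range(1, order + 1):
        for taus in combinations_with_replacement(range(M), p):
            col = np.ones(N)
            for t in taus:
                col = col * lagged[:, t]         # product of delayed inputs
            columns.append(col)
            index.append(taus)
    A = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return dict(zip(index, coef))                # (tau_1..tau_p) -> coefficient
```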
This method was invented by Franz and Schölkopf [ 12 ] and is based on statistical learning theory . Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization ). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive. [ 13 ]
This method was developed by van Hemmen and coworkers [ 14 ] and utilizes Dirac delta functions to sample the Volterra coefficients.
|
https://en.wikipedia.org/wiki/Volterra_series
|
Voltinism is a term used in biology to indicate the number of broods or generations of an organism in a year. The term is most often applied to insects, and is particularly in use in sericulture , where silkworm varieties vary in their voltinism.
The speckled wood butterfly is univoltine in the northern part of its range, e.g. northern Scandinavia. Adults emerge in late spring, mate, and die shortly after laying eggs; their offspring will grow until pupation , enter diapause in anticipation of the winter, and emerge as adults the following year – thus resulting in a single generation of butterflies per year. In southern Scandinavia, the same species is bivoltine [ 2 ] – here, the offspring of spring-emerging adults will develop directly into adults during the summer, mate, and die. Their offspring in turn constitute a second generation, which is the generation that will enter winter diapause and emerge as adults (and mate) in the spring of the following year. This results in a pattern of one short-lived generation (c. 2–3 months) that breeds during the summer, and one long-lived generation (c. 9–10 months) that diapauses through the winter and breeds in the spring. The Rocky Mountain parnassian and the High brown fritillary are more examples of univoltine butterfly species. [ 2 ] [ 3 ]
The bee species Macrotera portalis is bivoltine, and is estimated to have about 2 or 3 broods annually. During winter, individuals remain in diapause, in their pharate or prepupal stage. This diapause stage continues until metamorphosis in the next spring or summer, whereupon the bees emerge as adults. [ 4 ] Another example of a bivoltine species is Cyclosa turbinata which is known to reproduce once in the late spring and once again in the fall.
The Dawson's burrowing bee is an example of a univoltine insect of the order Hymenoptera . The brood of one winter will remain dormant underground until the following winter, and then will surface from their burrows to mate once, and establish new nests.
The term partial voltinism is used to refer to two different (but not necessarily exclusive) situations:
The number of breeding cycles in a year is under genetic control in many species [ 7 ] and they are evolved in response to the environment. Many phytophagous species that are dependent on seasonal plant resources are univoltine. Some such species have the ability to diapause for a large part of the year, typically during a cold winter. [ 8 ] Others that bore in wood or other low-grade, but plentiful, food material may spend nearly the entire year feeding, with only brief pupal, adult and egg stages to complete a univoltine life cycle. Yet other species that live in tropical regions with little seasonality may be highly multivoltine, with several generations feeding on constantly growing vegetation (such as some species of Saturniidae ), or continually renewed detritus, such as Drosophila and many other genera of flies with a life cycle of just a week or two. [ 9 ]
|
https://en.wikipedia.org/wiki/Voltinism
|
A voltmeter is an instrument used for measuring electric potential difference between two points in an electric circuit . It is connected in parallel . It usually has a high resistance so that it takes negligible current from the circuit.
Analog voltmeters move a pointer across a scale in proportion to the voltage measured and can be built from a galvanometer and series resistor . Meters using amplifiers can measure tiny voltages of microvolts or less. Digital voltmeters give a numerical display of voltage by use of an analog-to-digital converter .
Voltmeters are made in a wide range of styles, some separately powered (e.g. by battery), and others powered by the measured voltage source itself. Instruments permanently mounted in a panel are used to monitor generators or other fixed apparatus. Portable instruments, usually equipped to also measure current and resistance in the form of a multimeter , are standard test instruments used in electrical and electronics work. Any measurement that can be converted to a voltage can be displayed on a meter that is suitably calibrated; for example, pressure, temperature, flow or level in a chemical process plant.
General-purpose analog voltmeters may have an accuracy of a few percent of full scale and are used with voltages from a fraction of a volt to several thousand volts. Digital meters can be made with high accuracy, typically better than 1%. Specially calibrated test instruments have higher accuracies, with laboratory instruments capable of measuring to accuracies of a few parts per million. Part of the problem of making an accurate voltmeter is that of calibration to check its accuracy. In laboratories, the Weston cell is used as a standard voltage for precision work. Precision voltage references are available based on electronic circuits.
In circuit diagrams, a voltmeter is represented by the letter V in a circle, with two emerging lines representing the two points of measurement.
A moving coil galvanometer can be used as a voltmeter by inserting a resistor in series with the instrument. The galvanometer has a coil of fine wire suspended in a strong magnetic field. When an electric current is applied, the interaction of the magnetic field of the coil and of the stationary magnet creates a torque, tending to make the coil rotate. The torque is proportional to the current through the coil. The coil rotates, compressing a spring that opposes the rotation. The deflection of the coil is thus proportional to the current, which in turn is proportional to the applied voltage, which is indicated by a pointer on a scale.
One of the design objectives of the instrument is to disturb the circuit as little as possible and so the instrument should draw a minimum of current to operate. This is achieved by using a sensitive galvanometer in series with a high resistance, and then the entire instrument is connected in parallel with the circuit examined.
The sensitivity of such a meter can be expressed as "ohms per volt", the number of ohms resistance in the meter circuit divided by the full scale measured value. For example, a meter with a sensitivity of 1000 ohms per volt would draw 1 milliampere at full scale voltage; if the full scale was 200 volts, the resistance at the instrument's terminals would be 200 000 ohms and at full scale, the meter would draw 1 milliampere from the circuit under test. For multi-range instruments, the input resistance varies as the instrument is switched to different ranges.
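A small worked sketch of this arithmetic, reproducing the example above (function and variable names are illustrative):

```python
def meter_loading(sensitivity_ohm_per_volt, full_scale_volts):
    """Input resistance and full-scale current draw of a passive voltmeter,
    from the ohms-per-volt sensitivity figure described above."""
    r_in = sensitivity_ohm_per_volt * full_scale_volts   # ohms
    i_full_scale = 1.0 / sensitivity_ohm_per_volt        # amperes
    return r_in, i_full_scale

r, i = meter_loading(1000, 200)   # the 1000 ohm/V, 200 V example in the text
print(f"input resistance = {r:,.0f} ohm, full-scale draw = {i * 1e3:.1f} mA")
# -> input resistance = 200,000 ohm, full-scale draw = 1.0 mA
```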
Moving-coil instruments with a permanent-magnet field respond only to direct current. Measurement of AC voltage requires a rectifier in the circuit so that the coil deflects in only one direction. Some moving-coil instruments are also made with the zero position in the middle of the scale instead of at one end; these are useful if the voltage reverses its polarity.
Voltmeters operating on the electrostatic principle use the mutual repulsion between two charged plates to deflect a pointer attached to a spring. Meters of this type draw negligible current but are sensitive to voltages over about 100 volts and work with either alternating or direct current.
The sensitivity and input resistance of a voltmeter can be increased if the current required to deflect the meter pointer is supplied by an amplifier and power supply instead of by the circuit under test. The electronic amplifier between input and meter gives two benefits; a rugged moving coil instrument can be used, since its sensitivity need not be high, and the input resistance can be made high, reducing the current drawn from the circuit under test. Amplified voltmeters often have an input resistance of 1, 10, or 20 megohms which is independent of the range selected. A once-popular form of this instrument used a vacuum tube in the amplifier circuit and so was called the vacuum tube voltmeter (VTVM). These were almost always powered by the local AC line current and so were not particularly portable. Today these circuits use a solid-state amplifier using field-effect transistors , hence FET-VM, and appear in handheld digital multimeters as well as in bench and laboratory instruments. These largely replaced non-amplified multimeters except in the least expensive price ranges.
Most VTVMs and FET-VMs handle DC voltage, AC voltage, and resistance measurements; modern FET-VMs add current measurements and often other functions as well. A specialized form of the VTVM or FET-VM is the AC voltmeter. These instruments are optimized for measuring AC voltage. They have much wider bandwidth and better sensitivity than a typical multifunction device.
A digital voltmeter (DVM) measures an unknown input voltage by converting the voltage to a digital value and then displays the voltage in numeric form. DVMs are usually designed around a special type of analog-to-digital converter called an integrating converter .
DVM measurement accuracy is affected by many factors, including temperature, input impedance, and DVM power supply voltage variations. Less expensive DVMs often have input resistance on the order of 10 MΩ. Precision DVMs can have input resistances of 1 GΩ or higher for the lower voltage ranges (e.g. less than 20 V). To ensure that a DVM's accuracy is within the manufacturer's specified tolerances, it must be periodically calibrated against a voltage standard such as the Weston cell .
The first digital voltmeter was invented and produced by Andrew Kay of Non-Linear Systems (and later founder of Kaypro ) in 1954. [ 1 ]
Simple AC voltmeters use a rectifier connected to a DC measurement circuit, which responds to the average value of the waveform. The meter can be calibrated to display the root mean square value of the waveform, assuming a fixed relation between the average value of the rectified waveform and the RMS value. If the waveform departs significantly from the sinewave assumed in the calibration, the meter will be inaccurate, though for simple wave shapes the reading can be corrected by multiplying by a constant factor. Early "true RMS" circuits used a thermal converter that responded only to the RMS value of the waveform. Modern instruments calculate the RMS value by electronically calculating the square of the input value, taking the average, and then calculating the square root of the value. This allows accurate RMS measurements for a variety of waveforms. [ 2 ]
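The discrepancy between an average-responding, sine-calibrated meter and a true-RMS measurement can be illustrated numerically; the sketch below uses the sine form factor π/(2√2) ≈ 1.111 and synthetic waveforms:

```python
import numpy as np

# An average-responding meter scales the rectified mean by the sine form
# factor, so it is exact for sine waves but errs for other shapes.
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
waves = {"sine": np.sin(2 * np.pi * 5 * t),
         "square": np.sign(np.sin(2 * np.pi * 5 * t))}
form_factor = np.pi / (2 * np.sqrt(2))     # ~1.1107
for name, w in waves.items():
    true_rms = np.sqrt(np.mean(w ** 2))
    meter_reading = form_factor * np.mean(np.abs(w))
    print(f"{name:6s}: true RMS = {true_rms:.3f}, meter reads {meter_reading:.3f}")
# sine: both ~0.707; square: true RMS 1.000 but the meter reads ~1.111.
```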
|
https://en.wikipedia.org/wiki/Voltmeter
|
The VolturnUS is a floating concrete structure that supports a wind turbine, designed by University of Maine Advanced Structures and Composites Center and deployed by DeepCwind Consortium in 2013. The VolturnUS can support wind turbines in water depths of 150 ft (46 m) or more.
The DeepCwind Consortium and its partners deployed a 1:8 scale VolturnUS in 2013. Efforts are now underway by Maine Aqua Ventus 1, GP, LLC, to deploy two full-scale VolturnUS structures off the coast of Monhegan Island, Maine, in the UMaine Deepwater Offshore Wind Test Site . This demonstration project, known as New England Aqua Ventus I, is planned to deploy two 6 MW wind turbines by 2020. [ 2 ]
The University of Maine announced in September 2017 that its VolturnUS design became the first floating offshore wind turbine to meet American Bureau of Shipping requirements for floating offshore wind turbines, demonstrating the feasibility of the VolturnUS concept. [ 3 ] The design review was conducted against the American Bureau of Shipping (ABS) Guide for Building and Classing Floating Offshore Wind Turbine Installations. [ 4 ]
North America’s first floating grid-connected wind turbine was lowered into the Penobscot River in Maine on 31 May 2013 by the University of Maine Advanced Structures and Composites Center and its partners. [ 5 ] [ 6 ] [ 7 ] The VolturnUS 1:8 was towed down the Penobscot River where it was deployed for 18 months in Castine, ME, along with a UMaine-developed floating LiDAR . [ 8 ]
The prototype employs a 20 kW Renewegy VP-20 wind turbine with a 9.6-meter (31-foot) rotor. [ 9 ] It is 65 feet (20 meters) tall, i.e. 1:8 the scale of a 6-megawatt (MW) design with a 450-foot (140-meter) rotor diameter. [ 10 ] The VolturnUS design utilizes a concrete semi-submersible floating hull and a composite materials tower [ 11 ] [ 12 ] designed to reduce both capital and operation & maintenance costs, and to allow local manufacturing throughout the US and the world. The VolturnUS technology is the culmination of collaborative research and development conducted by the University of Maine-led DeepCwind Consortium. [ 13 ]
During its deployment, it experienced numerous storm events representative of design environmental conditions prescribed by the American Bureau of Shipping Guide for Building and Classing Floating Offshore Wind Turbines, 2013. [ 14 ] [ 15 ] [ 16 ] [ 17 ] It was taken out of the water in November 2014. [ 9 ]
VolturnUS' floating concrete hull technology can support wind turbines in water depths of 45 meters (148 feet) or more, and has the potential to significantly reduce the cost of offshore wind.
With 12 independent cost estimates from around the U.S. and the world, it has been found to significantly reduce costs compared to existing floating systems. The design has also received a complete third-party engineering review. [ 18 ]
In June 2016, the UMaine-led New England Aqua Ventus I project won top tier status from the US Department of Energy (DOE) Advanced Technology Demonstration Program for Offshore Wind. This means that the New England Aqua Ventus project is now automatically eligible for an additional $39.9 million in construction funding from the DOE, as long as the project continues to meet its milestones. The developer asserts that the New England Aqua Ventus I project will likely become the first commercial scale floating wind project in the Americas. [ 18 ]
U.S. Senators Susan Collins and Angus King announced in June 2016 that Maine’s New England Aqua Ventus I floating offshore wind demonstration project was selected by the U.S. Department of Energy to participate in the Offshore Wind Advanced Technology Demonstration program. [ 19 ] The project is opposed by Senator Dow with Bill LR1613. [ 20 ]
New England Aqua Ventus I is one of two leading projects [ 21 ] that are each eligible for up to $39.9 million in additional funding over three years for the construction phase of the demonstration program.
In 2020, UMaine expected costs to be $74/MWh by 2027 and $57/MWh by 2032. [ 22 ] In 2021, Maine applied for an offshore test area. [ 23 ]
|
https://en.wikipedia.org/wiki/VolturnUS
|
In thermodynamics , the volume of a system is an important extensive parameter for describing its thermodynamic state . The specific volume , an intensive property, is the system's volume per unit mass . Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature . For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law .
The physical region covered by a system may or may not coincide with a control volume used to analyze the system.
The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work , or may be used to produce work. An isochoric process , however, operates at constant volume, thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process , in particular, causes changes to the system so that the quantity pV^n is constant (where p is pressure, V is volume, and n is the polytropic index, a constant). Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process. For instance, for very large values of n approaching infinity, the process becomes constant-volume.
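As an illustration of the polytropic relation, here is a minimal sketch computing the final pressure and the boundary work ∫p dV = (p₂V₂ − p₁V₁)/(1 − n) for n ≠ 1; the numbers are illustrative, not from the text.

```python
# Polytropic process p * V**n = const: given an initial state and a final
# volume, find the final pressure and the boundary work done by the gas.
def polytropic_compression(p1, v1, v2, n):
    p2 = p1 * (v1 / v2) ** n
    work_by_gas = (p2 * v2 - p1 * v1) / (1.0 - n)   # J; negative = compression
    return p2, work_by_gas

p2, w = polytropic_compression(p1=100e3, v1=0.05, v2=0.01, n=1.3)
print(f"p2 = {p2 / 1e3:.0f} kPa, work done by the gas = {w / 1e3:.1f} kJ")
# -> p2 ~ 810 kPa; w ~ -10.3 kJ (work is done on the gas during compression)
```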
Gases are compressible , thus their volumes (and specific volumes) may be subject to change during thermodynamic processes. Liquids, however, are nearly incompressible, thus their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid as a response to a pressure, and may be determined for substances in any phase. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature.
Many thermodynamic cycles are made up of varying processes, some which maintain a constant volume and some which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter .
Typical units for volume are m³ (cubic meters ), l ( liters ), and ft³ (cubic feet ).
Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must be altered. Hence, volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved.
Volume is one of a pair of conjugate variables , the other being pressure. As with all conjugate pairs, the product is a form of energy. The product pV is the energy lost to a system due to mechanical work. This product is one term which makes up enthalpy H :

H = U + pV,

where U is the internal energy of the system.
The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. In thermodynamic systems where the temperature and volume are held constant, the measure of "useful" work attainable is the Helmholtz free energy ; and in systems where the volume is not held constant, the measure of useful work attainable is the Gibbs free energy .
Similarly, the appropriate value of heat capacity to use in a given process depends on whether the process produces a change in volume. The heat capacity is a function of the amount of heat added to a system. In the case of a constant-volume process, all the heat affects the internal energy of the system (i.e., there is no pV-work, and all the heat affects the temperature). However, in a process without a constant volume, the heat addition affects both the internal energy and the work (i.e., the enthalpy); thus the temperature changes by a different amount than in the constant-volume case and a different heat capacity value is required.
Specific volume ( ν ) is the volume occupied by a unit of mass of a material. [ 1 ] In many cases, the specific volume is a useful quantity to determine because, as an intensive property, it can be used to determine the complete state of a system in conjunction with another independent intensive variable . The specific volume also allows systems to be studied without reference to an exact operating volume, which may not be known (nor significant) at some stages of analysis.
The specific volume of a substance is equal to the reciprocal of its mass density . Specific volume may be expressed in m³/kg, ft³/lb, ft³/slug, or mL/g.
ν = V/m = 1/ρ, where V is the volume, m is the mass and ρ is the density of the material.
For an ideal gas , ν = R̄T/P, where R̄ is the specific gas constant , T is the temperature and P is the pressure of the gas.
Specific volume may also refer to molar volume .
The volume of a gas increases proportionally to absolute temperature and decreases inversely proportionally to pressure , approximately according to the ideal gas law : V = nRT/p, where n is the amount of substance (in moles), R is the gas constant , T is the absolute temperature and p is the pressure.
To simplify, a volume of gas may be expressed as the volume it would have in standard conditions for temperature and pressure , which are 0 °C (32 °F) and 100 kPa. [ 2 ]
In contrast to other gas components, the water content of air, or humidity , depends to a higher degree on vaporization and condensation from or into water, which, in turn, mainly depends on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until the mixture returns to almost the same humidity as before, so the resulting total volume deviates from what the ideal gas law predicted. Conversely, decreasing the temperature would also make some water condense, again making the final volume deviate from the ideal gas law prediction.

Therefore, gas volume may alternatively be expressed excluding the humidity content: V d (volume dry). This fraction more accurately follows the ideal gas law. By contrast, V s (volume saturated) is the volume a gas mixture would have if humidity were added to it until saturation (or 100% relative humidity ).
To compare gas volume between two conditions of different temperature or pressure (1 and 2), assuming nR are the same, the following equation uses humidity exclusion in addition to the ideal gas law:
V_2 = V_1 \times \frac{T_2}{T_1} \times \frac{p_1 - p_{w,1}}{p_2 - p_{w,2}}

where, in addition to the terms used in the ideal gas law, p_{w,1} and p_{w,2} denote the water vapour pressures at conditions 1 and 2.
For example, one may calculate the volume that 1 liter of air (a) at 0 °C, 100 kPa, p_w = 0 kPa (known as STPD, see below) would occupy when breathed into the lungs, where it is mixed with water vapor (l) and quickly reaches 37 °C (99 °F), 100 kPa, p_w = 6.2 kPa (BTPS):
V_l = 1\ \mathrm{l} \times \frac{310\ \mathrm{K}}{273\ \mathrm{K}} \times \frac{100\ \mathrm{kPa} - 0\ \mathrm{kPa}}{100\ \mathrm{kPa} - 6.2\ \mathrm{kPa}} = 1.21\ \mathrm{l}
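The same conversion as a small sketch (temperatures in kelvin, pressures in kPa; the function name is illustrative):

```python
# Humidity-corrected volume conversion: V2 = V1 * (T2/T1) * (p1-pw1)/(p2-pw2).
def convert_gas_volume(v1, t1, p1, pw1, t2, p2, pw2):
    return v1 * (t2 / t1) * (p1 - pw1) / (p2 - pw2)

v_lungs = convert_gas_volume(1.0, 273.0, 100.0, 0.0,   # 1 L dry air at STPD
                             310.0, 100.0, 6.2)        # body conditions (BTPS)
print(f"{v_lungs:.2f} L")   # ~1.21 L, matching the worked example above
```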
Some common expressions of gas volume with defined or variable temperature, pressure and humidity inclusion are:
The following conversion factors can be used to convert between expressions for volume of a gas: [ 3 ]
The partial volume of a particular gas is a fraction of the total volume occupied by the gas mixture, with unchanged pressure and temperature. In gas mixtures, e.g. air, the partial volume allows focusing on one particular gas component, e.g. oxygen.
It can be approximated both from partial pressure and from molar fraction: [ 4 ]

V_X = V_{tot} \times \frac{P_X}{P_{tot}} = V_{tot} \times \frac{n_X}{n_{tot}}
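As a small sketch of this relation (the oxygen mole fraction 0.209 is an approximate textbook value, used here only as an illustration):

```python
# Partial volume from mole fraction: V_X = V_tot * n_X / n_tot.
def partial_volume(v_total, mole_fraction):
    return v_total * mole_fraction

print(f"{partial_volume(1.0, 0.209):.3f} L of O2 per litre of air")  # ~0.209 L
```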
|
https://en.wikipedia.org/wiki/Volume_(thermodynamics)
|
Volume Logic was commercial software which added audio enhancement features to media players . Originally released by Octiv Inc. in 2004, it was the first plug-in for Apple's iTunes for Mac and Windows. In April 2005, the Octiv corporation was acquired by Plantronics . [ 1 ]
Volume Logic was available for RealPlayer , Windows Media Player , Winamp and Musicmatch .
It was designed to subjectively improve the listening experience by, for example, increasing the loudness of soft passages, controlling the loudness of loud passages without audible distortion, and emphasizing the loudness of bass separately.
It corrected a problem with RealPlayer and the system's wave volume control: [ 2 ] [ 3 ] Volume Logic disabled RealPlayer's volume control and used its own.
Presets stored settings for the amount of each kind of processing to be applied: automatic gain control, limiting, bass boost, etc. Presets cannot be added.
The Volume Logic plug-in incorporated multi-band dynamics processing technology, solving common audio problems such as speaker distortion and volume shifting. [ citation needed ]
In late 2005, Volume Logic 1.3 was released. This new version was recognized in Softpedia , MacUpdate , and Brothersoft. [ citation needed ] After compatibility issues arose with Apple's Mac OS X v10.5 , Plantronics ceased further development of Volume Logic, leaving Windows users with v1.4, which is compatible with iTunes 7 .
Leif Claesson, the inventor of audio processing core technology utilized by Octiv and Volume Logic, in 2007 joined with Octiv co-founder Keith Edwards to form a partnership to sell follow-on technology called Breakaway. [ 4 ]
|
https://en.wikipedia.org/wiki/Volume_Logic
|
The volume (W) and displacement (Δ) indicators were discovered by Philippe Samyn in 1997 to help in the search for the optimal geometry of architectural structures.

The study is limited to the search for the geometry giving the structure of minimum volume.
The cost of a structure depends on the nature and the quantity of the materials used as well as the tools and human resources required for its production.
Although technological progress has reduced the cost of tools and the amount of human resources required, and despite the fact that computerised calculation tools can now be used to determine the dimension of a structure so that the load it bears at every point is within the admissible limits allowed by its constituent materials, it is also necessary for its geometry to be optimal. It is far from simple to find this optimal point because the choice available is so vast.
Furthermore, the resistance of the structure is not the only criterion to take into account. In many cases, it is also important to ensure that it will not undergo excessive deformation under static loads or that it does not vibrate to inconvenient or dangerous levels when subjected to dynamic loads.
Volume and displacement indicators, W and Δ, discovered by Philippe Samyn in August 1997, are useful tools in this regard. This approach does not take into account phenomena of elastic instability. It can indeed be shown that it is always possible to design a structure so that this effect becomes negligible.
The objective is to ascertain the optimal morphology for a two-dimensional structure with constant thickness, which:
Each form chosen corresponds to a volume of material V (in m³) and a maximum deformation δ (in m).
Their calculation depends on the factors L , H , E , σ and F . These calculations are long and tedious, and they obscure the objective of finding the optimal form.

It is, nevertheless, possible to overcome this problem by setting each factor to unity while all other characteristics remain the same: length L is therefore set to 1 m, H to H/L , E and σ to 1 Pa, and F to 1 N.

This "reduced" structure has a volume of material W = σV/(FL) (the volume indicator) and a maximum deformation Δ = Eδ/(σL) (the displacement indicator). Their main characteristic is that they are numbers without physical dimensions (dimensionless) and their value, for every morphology considered, depends only on the ratio L/H , i.e. the geometric slenderness of the form.
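A minimal sketch of these definitions (material values are assumed, steel-like): a bar in pure tension at its allowable stress, with V = A·L, F = σ·A and δ = σL/E, gives W = Δ = 1, a convenient baseline against which other morphologies can be compared.

```python
# The two dimensionless indicators: W = sigma*V/(F*L), Delta = E*delta/(sigma*L).
def indicators(V, delta, F, L, sigma, E):
    W = sigma * V / (F * L)           # volume indicator
    Delta = E * delta / (sigma * L)   # displacement indicator
    return W, Delta

sigma, E, L, A = 235e6, 210e9, 3.0, 1e-3     # steel-like values (assumed)
W, D = indicators(V=A * L, delta=sigma * L / E, F=sigma * A,
                  L=L, sigma=sigma, E=E)
print(W, D)   # -> 1.0 1.0 for the tension-bar sanity check
```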
This method can easily be applied to three-dimensional structures as illustrated in the following examples.
The theory related to the indicators has been taught since 2000 at, among other institutions, the department of Civil Engineering and of Architecture at the Vrije Universiteit Brussel (VUB; section "material mechanics and constructions"), leading to research and publications under the direction of Prof. Dr. Ir. Philippe Samyn (from 2000 to 2006), Prof. Dr. Ir. Willy Patrick De Wilde (from 2000 to 2011) and now Prof. Dr. Ir. Lincy Pyl.
The "reference book" [ 1 ] and, before it, the reference thesis [ 2 ] report the developments of the theory at Samyn and Partners as well as at the VUB, up to 2004.
The theory is open to everyone who wants to contribute, W and Δ being to be calculated for any resistant structure as defined in paragraph 1 here above.
Progress in materials science, robotics and three-dimensional printing leads to the creation of new structural forms lighter than the lightest known today.

The geometry of minimal surfaces of constant thickness in a homogeneous material is, for example, substantially modified when the thickness and/or the local allowable stress vary.
The macrostructures considered here may be composed of "structural elements" whose material presents a "microstructure".

Whether one seeks to limit the stress or the deformation, the macrostructure, the structural elements and the microstructure each have a weight Vρ , where ρ is the volumetric weight of the material (in N/m³), a function of the solicitations { F 0 } (for "force" in general) applied to them, of their size { L 0 } (for length or "size" in general), of their shape { G e } (for geometry or "shape" in general), and of their constituent material { M a } (for "material" in general).

In other words, shape and material ({ G e }{ M a }) define the weight ( Vρ ) of a structure of a given size under a given force ({ F 0 }{ L 0 }).
In material mechanics and for the structural elements under a specific loading case, the factor { G e } corresponds to the "form factor" for elements of continuous section out of a solid material (without voids).
The constituent material might, however, present a microstructure with voids. Such a cellular structure then enhances the form factor, whatever the loading case.

The factor { M a } characterizes a material whose efficiency can be compared with that of another for a given loading case, independently of the form factor { G e }.
The indicators W = σV / FL and Δ = δE / σL just defined, characterize the macrostructures, while the same notations and symbols in small letters, w = σv / fl and Δ = δE / σl , refer to the structural element.
Figure 1 gives the values of W and Δ for structural elements subject to traction, compression, bending and shear. The left column relates to the limitation of stress and the right column to the limitation of deformation. It shows the direct relation of W to { G e }{ M a } as:
and
or
Then, as W and Δ depend only on L/H :
and:
which, for a given loading case, is the specific weight of a macrostructure per unit of force and length, depending only on the geometry through L/H and on the material through σ/ρ .

Wρ/σ thus includes the material factor { M a } ( ρ/σ and ρ/E for tension and compression without buckling, ρ/E^{1/2} for compression limited by buckling, ρ/σ^{2/3} and ρ/E^{1/2} for pure bending, ρ√3/σ and ρ/G for pure shear) and the form factor { G e }.
All other factors being equal, a cluster of tubes with a diameter H and a wall thickness e , compared to a solid bar of equal volume in a material characterized by ρ , σ , E and G , presents an apparent density ρ_a = 4k(1−k)ρ with k = e/H , an allowable stress σ_a = 4k(1−k)σ , a Young's modulus E_a = 4k(1−k)E and a shear modulus G_a = 4k(1−k)G .
Thus ρ_a / E_a^{1/2} = [4k(1−k)]^{1/2} ρ/E^{1/2}

and ρ_a / σ_a^{2/3} = [4k(1−k)]^{1/3} ρ/σ^{2/3} , both smaller than the corresponding solid-material factors whenever 4k(1−k) < 1.
This explains the better performances of lighter materials for structural elements subject to compression or bending.
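A small sketch of this scaling (input values are assumed): every apparent property of the tube cluster carries the same void factor 4k(1−k), so ratios such as ρ_a/σ_a are unchanged while ρ_a/E_a^{1/2} and ρ_a/σ_a^{2/3} improve.

```python
# Apparent properties of a cluster of tubes (diameter H, wall e, k = e/H):
# each solid-material property is scaled by the void factor 4k(1-k).
def tube_cluster_properties(rho, sigma, E, G, k):
    f = 4.0 * k * (1.0 - k)
    return {"rho_a": f * rho, "sigma_a": f * sigma,
            "E_a": f * E, "G_a": f * G, "factor": f}

props = tube_cluster_properties(rho=77e3, sigma=235e6, E=210e9, G=81e9, k=0.1)
print(f"void factor = {props['factor']:.2f}")   # 0.36: 64% lighter than solid
print(props["rho_a"] / props["E_a"] ** 0.5
      < 77e3 / 210e9 ** 0.5)                    # True: buckling factor improves
```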
This indicator allows to compare the efficiency of macrostructures including geometry and material.
It echoes the work of M.F. Ashby: "Materials Selection in Mechanical Design" (1992). [ 3 ] He analyses { G e } and { M a } separately since, in his studies, { M a } relates to a large number of the physical properties of materials.

Different and complementary, it can also be placed alongside work carried out since 1969 by the Institut für Leichte Flächentragwerke in Stuttgart under the direction of Frei Otto and now Werner Sobek, which refers to indices named Tra and Bic . [ 4 ] The Tra is defined as the product of the length of the trajectory of the force F r (causing the collapse of the structure) onto the supports by the intensity of this force, and the Bic is the ratio of the mass of the structure to Tra .
Since ρ* is the density of the material (in kg/m 3 ), and α is, like W , a constant depending on the type of structure and the loading case:
therefore, with stress σ_r reached under F_r ,
and as
Unlike W , which is dimensionless, Bic is expressed in kg/Nm . Therefore, depending on the material, an independent comparison of different morphologies is not possible.
It is surprising to note that, despite the abundance of these works, none of them mentions or makes any effort to study W and its relationship with L/H .

It appears that only V. Quintas Ripoll [ 5 ] [ 6 ] and W. Zalewski and St. Kus [ 7 ] mentioned the volume indicator W , without examining it in depth.
In this regard, it is important to note that the existence of anchoring points of an element in traction may reduce the apparent permissible stress to the same level as the reduction necessary to take into account a moderate level of elastic instability.
The influence on W of the buckling of the compressed parts on one side and of the anchoring points at the extremities of an element in traction on the other side is analysed on pages 30 to 58 in the « reference book ».
It follows that initially, only W and Δ should be taken into account for the morphological design of a structure, assuming that it is ultra-dampened (i.e. its internal damping is greater than the critical damping), which makes it impervious to dynamic stress.
The volume V of a structure is therefore directly proportional to the total intensity of the force F which is applied to it, to its length L and to the morphological factor W ; it is inversely proportional to the stress σ to which it can be subjected. Furthermore, the weight of a structure is proportional to the density ρ of the material from which it is constructed. However, its maximum displacement δ remains proportional to the span L and the morphological factor Δ, as well as the ratio between its working stress σ and the modulus of elasticity E .
If it is a case of limiting the weight (or the volume) and the deformation of a structure for a given force F and span L , with all other aspects remaining unchanged, then the work of the structural engineer involves minimising W and ρ/σ on one side and Δ and σ/E on the other.
For the large majority of compressed elements, it is possible to limit the reduction of the working stress to 25% by taking into account elastic instability, provided that the designer focuses on ensuring an efficient geometric design from as early as the initial sketches. This means that the increase in their volume indicator can also be limited to 25%. The volume of the elements subject to pure traction is also only very rarely limited to the product of the net distance over which a force is applied by a section strained at the permissible stress. In other words, their real volume indicator is thus also higher than the one which results from the calculation of W . A bar under traction can be welded at its extremities; no extra material apart from the negligible welding material is added, but the rigidity introduces parasitic moments which absorb some of the permissible stress.
The bar can be articulated at its extremities and work at its permissible stress, but this requires close end sockets or attachment mechanisms whose volume is far from negligible, especially if the bar is short or highly stressed. As L.H. Cox demonstrated, [ 9 ] in this case, it is worth taking into account n bars each with a cross-section of Ω/ n , strained by force F/n with 2 n sockets, instead of one bar with a cross-section Ω strained by a force F with 2 sockets, since the total volume of 2 n sockets in the first case is much less than that of 2 sockets in the second.
The anchoring of the extremities of a bar under traction can also be ensured by adherence, as is usually the case for the rebars in elements made of reinforced concrete. In this specific case, it is necessary to have an anchoring length of at least 30 times the diameter of the bar. The bar then has a length L + 60 H for a useful length L ; its theoretical volume indicator W = 1 becomes W = 1 + 60 H / L . Consequently, L / H must be greater than 240 (which is always theoretically possible) so that W does not increase by more than 25%. This observation also helps to show another reason for taking into account n bars with a cross-section Ω/ n instead of one bar with a cross-section Ω.
Finally, connections consisting of bolts, dowels, pins or nails, especially in the case of wooden components, significantly reduce usable sections. For elements in traction, a reduction of 25% in the working stress or an increase of 25% in the volume is therefore also necessary in the majority of cases. Determining the volume and the displacement of a structure using the indicators W and Δ is therefore reliable theoretically, providing that:
The volume of the material of the structure, as determined using W , can only be obtained accurately if the theoretical values of the relevant characteristic of the sections under strain σ can be measured in practice.
As shown in Figure 1 above, this characteristic is:
pure bending) ;
It is always possible to obtain the precise value of these characteristics when the parts are made of moulded materials, such as reinforced concrete, or squared-off materials, such as wood or stone. However, this is not the case for laminated or extruded materials, produced on an industrial production line, such as steel or aluminium. It is important therefore to produce these elements with the smallest possible difference in size between two of them in order to avoid an unnecessary use of material. This use is consistent when the relative deviation c between two successive values k_n and k_{n+1} is constant, thus (k_{n+1} − k_n)/k_n = c , i.e. k_{n+1} = (c+1)k_n , hence k_n = (c+1)^n k_0 .
This is the principle of the geometric series known as the Renard series (named after Colonel Renard, who was the first to use them in calculating the diameter of cabling on aircraft) featured in the French standard NF X01-002. [ 10 ] When all the necessary values are only very slightly greater than a series value, c represents the maximum increase and c/2 the average increase of W . Since steel profiles are universally used, their case requires an in-depth examination (see the "reference book", pages 26 to 29). Consequently, the use of industrial steel profiles automatically leads to a significant increase of W :
This situation is magnified when the number of profiles available is restricted, which may explain the use of forms which are not theoretically optimal but which tend to subject the available profiles to the permissible stress σ (such as, for example, pylons for high-voltage electric lines or variable height truss bridges). For structures subject to pure bending, this also explains the use of flat plates of variable lengths added to the flanges of these I profiles to obtain the inertia or resisting moment required, with the greatest degree of accuracy. Conversely, the significant variety in the tubes available enables a relative deviation value c which is both smaller and more constant. They also cover a much wider range in both the lower and higher characteristic values. Since their geometric performance is practically identical to that of the I profiles, tubes are the most appropriate industrial solution in order to practically eliminate any increase in the volume indicator W . Nevertheless, practical issues of availability and corrosion may limit their use.
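Returning to the size-series principle above, here is a minimal sketch generating such a geometric series (the R10-like step is an illustrative choice):

```python
# Geometric (Renard-type) size series: k_n = (1 + c)**n * k_0, giving a
# constant relative step c between successive sizes.
def size_series(k0, c, count):
    return [k0 * (1.0 + c) ** n for n in range(count)]

c = 10 ** (1 / 10) - 1          # ~0.259: ten sizes per decade (R10-like)
print([round(v, 2) for v in size_series(1.0, c, 11)])
# -> [1.0, 1.26, 1.58, ..., 10.0]; rounding a required value up to the next
# series size costs at most c and on average c/2 extra material, as noted.
```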
The following figures show the values of the indicators according to the ratio L/H for a number of types of structures.
Figure 2 and 3: W and Δ for a horizontal isostatic span under a uniformly distributed vertical load made up of:
Figure 4: for the transfer to two equidistant supports on the horizontal of a vertical point load (in this case Δ=W) or evenly distributed lead: F = 1.
Figure 5 and 6: W for a vertical mast, with a constant width, subject to a horizontal load which is evenly distributed along its height or concentrated at the top.
Figure 7: W for a membrane of revolution on a vertical axis, with a constant or variable thickness, under an evenly distributed vertical load. It is surprising to note that the minimum value is reached for a conical dome of variable thickness with an opening angle of 90° ( L/H = 2; W = 0.5!).
Applications discussed in the « reference book » are:
W can easily be determined in order to optimise structures made up of a number of different construction elements (see « reference book » pages 100–106) as shown, for instance, for the wind turbine in Figure 8.
Or a parabolic roof coupled with large vertical glazed gables subject to wind loads, as seen at Leuven station in Belgium, shown in Figure 9 (see reference [ 11 ] for a detailed analysis).
The optimisation of the King Cross truss for the facade of the Europa building in Brussels (see reference [ 12 ] pages 93–101 for detailed analysis) is another example.
|
https://en.wikipedia.org/wiki/Volume_and_displacement_indicators_for_an_architectural_structure
|
Volume combustion synthesis (VCS) is a method of chemical synthesis in which the reactants are heated uniformly in a controlled manner until a reaction ignites throughout the volume of the reaction chamber. [ 1 ] The VCS mode is typically used for weakly exothermic reactions that require preheating prior to ignition. [ 2 ]
|
https://en.wikipedia.org/wiki/Volume_combustion_synthesis
|
In chemistry , concentration is the abundance of a constituent divided by the total volume of a mixture. Several types of mathematical description can be distinguished: mass concentration , molar concentration , number concentration , and volume concentration . [ 1 ] The concentration can refer to any kind of chemical mixture, but most frequently refers to solutes and solvents in solutions . The molar (amount) concentration has variants, such as normal concentration and osmotic concentration . Dilution is reduction of concentration, e.g. by adding solvent to a solution. The verb to concentrate means to increase concentration, the opposite of dilute.
The term derives from the post-classical Latin concentratio , "action or an act of coming together at a single place, bringing to a common center", used in 1550 or earlier; similar terms are attested in Italian (1589), Spanish (1589), English (1606), and French (1632). [ 2 ]
Often in informal, non-technical language, concentration is described in a qualitative way, through the use of adjectives such as "dilute" for solutions of relatively low concentration and "concentrated" for solutions of relatively high concentration. To concentrate a solution, one must add more solute (for example, alcohol), or reduce the amount of solvent (for example, water). By contrast, to dilute a solution, one must add more solvent, or reduce the amount of solute. Unless two substances are miscible , there exists a concentration at which no further solute will dissolve in a solution. At this point, the solution is said to be saturated . If additional solute is added to a saturated solution, it will not dissolve, except in certain circumstances, when supersaturation may occur. Instead, phase separation will occur, leading to coexisting phases, either completely separated or mixed as a suspension . The point of saturation depends on many variables, such as ambient temperature and the precise chemical nature of the solvent and solute.
Concentrations are often called levels , reflecting the mental schema of levels on the vertical axis of a graph , which can be high or low (for example, "high serum levels of bilirubin" are concentrations of bilirubin in the blood serum that are greater than normal ).
There are four quantities that describe concentration:
The mass concentration {\displaystyle \rho _{i}} is defined as the mass of a constituent {\displaystyle m_{i}} divided by the volume of the mixture {\displaystyle V} : {\displaystyle \rho _{i}={\frac {m_{i}}{V}}}
The SI unit is kg/m³ (equal to g/L).
The molar concentration {\displaystyle c_{i}} is defined as the amount of a constituent {\displaystyle n_{i}} (in moles) divided by the volume of the mixture {\displaystyle V} : {\displaystyle c_{i}={\frac {n_{i}}{V}}}
The SI unit is mol/m³. However, the unit mol/L (= mol/dm³) is more commonly used.
The number concentration {\displaystyle C_{i}} is defined as the number of entities of a constituent {\displaystyle N_{i}} in a mixture divided by the volume of the mixture {\displaystyle V} : {\displaystyle C_{i}={\frac {N_{i}}{V}}}
The SI unit is 1/m³.
The volume concentration {\displaystyle \sigma _{i}} (not to be confused with volume fraction [ 3 ] ) is defined as the volume of a constituent {\displaystyle V_{i}} divided by the volume of the mixture {\displaystyle V} : {\displaystyle \sigma _{i}={\frac {V_{i}}{V}}}
Being dimensionless, it is expressed as a number, e.g., 0.18 or 18%.
There seems to be no standard notation in the English literature. The letter {\displaystyle \sigma _{i}} used here is normative in German literature (see Volumenkonzentration ).
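As a worked illustration of the quantities above, the following Python sketch computes three of them for a hypothetical mixture: 5.0 g of NaCl dissolved to a total volume of 0.250 L (the molar mass of 58.44 g/mol and all amounts are assumed example values; the volume concentration is omitted, since it would additionally require the constituent's own volume):

    # Minimal sketch: mass, molar, and number concentration for an assumed mixture.
    AVOGADRO = 6.02214076e23  # entities per mole

    m_i = 5.0e-3     # mass of solute, kg (5.0 g, assumed)
    M_i = 58.44e-3   # molar mass of NaCl, kg/mol (assumed)
    V = 0.250e-3     # volume of the mixture, m^3 (0.250 L, assumed)

    n_i = m_i / M_i           # amount of substance, mol
    rho_i = m_i / V           # mass concentration, kg/m^3 (numerically equal to g/L)
    c_i = n_i / V             # molar concentration, mol/m^3
    C_i = n_i * AVOGADRO / V  # number concentration, 1/m^3

    print(f"rho_i = {rho_i:.1f} kg/m^3")
    print(f"c_i   = {c_i:.1f} mol/m^3 = {c_i / 1000:.3f} mol/L")
    print(f"C_i   = {C_i:.3e} m^-3")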
Several other quantities can be used to describe the composition of a mixture. These should not be called concentrations. [ 1 ]
Normality is defined as the molar concentration {\displaystyle c_{i}} divided by an equivalence factor {\displaystyle f_{\mathrm {eq} }} . Since the definition of the equivalence factor depends on context (which reaction is being studied), the International Union of Pure and Applied Chemistry and National Institute of Standards and Technology discourage the use of normality.
The molality of a solution {\displaystyle b_{i}} is defined as the amount of a constituent {\displaystyle n_{i}} (in moles) divided by the mass of the solvent {\displaystyle m_{\mathrm {solvent} }} ( not the mass of the solution): {\displaystyle b_{i}={\frac {n_{i}}{m_{\mathrm {solvent} }}}}
The SI unit for molality is mol/kg.
The mole fraction {\displaystyle x_{i}} is defined as the amount of a constituent {\displaystyle n_{i}} (in moles) divided by the total amount of all constituents in a mixture {\displaystyle n_{\mathrm {tot} }} : {\displaystyle x_{i}={\frac {n_{i}}{n_{\mathrm {tot} }}}}
The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole fractions.
The mole ratio {\displaystyle r_{i}} is defined as the amount of a constituent {\displaystyle n_{i}} divided by the total amount of all other constituents in a mixture: {\displaystyle r_{i}={\frac {n_{i}}{n_{\mathrm {tot} }-n_{i}}}}
If {\displaystyle n_{i}} is much smaller than {\displaystyle n_{\mathrm {tot} }} , the mole ratio is almost identical to the mole fraction.
The SI unit is mol/mol. However, the deprecated parts-per notation is often used to describe small mole ratios.
The mass fraction {\displaystyle w_{i}} is the fraction of one substance with mass {\displaystyle m_{i}} to the mass of the total mixture {\displaystyle m_{\mathrm {tot} }} , defined as: {\displaystyle w_{i}={\frac {m_{i}}{m_{\mathrm {tot} }}}}
The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass fractions.
The mass ratio {\displaystyle \zeta _{i}} is defined as the mass of a constituent {\displaystyle m_{i}} divided by the total mass of all other constituents in a mixture: {\displaystyle \zeta _{i}={\frac {m_{i}}{m_{\mathrm {tot} }-m_{i}}}}
If {\displaystyle m_{i}} is much smaller than {\displaystyle m_{\mathrm {tot} }} , the mass ratio is almost identical to the mass fraction.
The SI unit is kg/kg. However, the deprecated parts-per notation is often used to describe small mass ratios.
Concentration depends on the variation of the volume of the solution with temperature, due mainly to thermal expansion .
|
https://en.wikipedia.org/wiki/Volume_concentration
|
Volume contraction is a decrease in the volume of body fluid , including the dissolved substances that maintain osmotic balance ( osmolytes ). The loss of the water component of body fluid is specifically termed dehydration . [ 1 ]
Volume contraction involves a loss of extracellular fluid (ECF), intracellular fluid (ICF), or both.
Volume contraction of extracellular fluid is directly coupled to and almost proportional to volume contraction of blood plasma , which is termed hypovolemia . [ 2 ] [ 3 ] Thus, it primarily affects the circulatory system , potentially causing hypovolemic shock .
ECF volume contraction or hypovolemia is usually the type of volume contraction of primary concern in emergency, since ECF is approximately half the volume of ICF and is the first to be affected in e.g. bleeding . [ citation needed ] Volume contraction is sometimes even used synonymously with hypovolemia . [ citation needed ]
Volume contraction of intracellular fluid may occur after substantial fluid loss, since the ICF volume is much larger than that of the ECF, or after loss of potassium (K + ); see the section below.
ICF volume contraction may cause disturbances in various organs throughout the body.
Na + loss approximately correlates with fluid loss from ECF, since Na + has a much higher concentration in ECF than ICF. In contrast, K + has a much higher concentration in ICF than ECF, and therefore its loss rather correlates with fluid loss from ICF, since K + loss from ECF causes the K + in ICF to diffuse out of the cells, dragging water with it by osmosis .
When the body loses fluids, the amount lost from ICF and ECF, respectively, can be estimated by measuring volume and amount of substance of sodium (Na + ) and potassium (K + ) in the lost fluid, as well as estimating the body composition of the person.
1. First, the total amount of osmotically active substance in the body before the loss is estimated:
{\displaystyle n_{b}=Osm_{b}\times TBW_{b}}
where: {\displaystyle n_{b}} is the total amount of osmotically active substance in the body before the loss, {\displaystyle Osm_{b}} is the body fluid osmolarity before the loss, and {\displaystyle TBW_{b}} is the total body water before the loss.
2. The total amount of substance in the body after the loss is then estimated:
{\displaystyle n_{a}=n_{b}-n_{lostNa^{+}}-n_{lostK^{+}}}
where: {\displaystyle n_{a}} is the total amount of substance after the loss, and {\displaystyle n_{lostNa^{+}}} and {\displaystyle n_{lostK^{+}}} are the lost amounts of sodium and potassium, respectively.
3. The new osmolarity becomes:
{\displaystyle Osm_{a}={\frac {n_{a}}{TBW_{b}-V_{lost}}}}
where: {\displaystyle Osm_{a}} is the body fluid osmolarity after the loss, and {\displaystyle V_{lost}} is the volume of the lost fluid.
4. This osmolarity is evenly distributed in the body, and is used to estimate the new volumes of ICF and ECF, respectively:
{\displaystyle V_{ICFa}={\frac {n_{ICFa}}{Osm_{a}}}={\frac {V_{ICFb}\times Osm_{b}-n_{lostK^{+}}}{Osm_{a}}}}
where: {\displaystyle V_{ICFa}} and {\displaystyle n_{ICFa}} are the ICF volume and amount of substance after the loss, and {\displaystyle V_{ICFb}} is the ICF volume before the loss.
Analogously, for the extracellular fluid:
{\displaystyle V_{ECFa}={\frac {n_{ECFa}}{Osm_{a}}}={\frac {V_{ECFb}\times Osm_{b}-n_{lostNa^{+}}}{Osm_{a}}}}
where: {\displaystyle V_{ECFa}} and {\displaystyle n_{ECFa}} are the ECF volume and amount of substance after the loss, and {\displaystyle V_{ECFb}} is the ECF volume before the loss.
5. The volume of fluid lost from each compartment is then:
{\displaystyle V_{lostICF}=V_{ICFb}-V_{ICFa}}
{\displaystyle V_{lostECF}=V_{ECFb}-V_{ECFa}}
where: {\displaystyle V_{lostICF}} and {\displaystyle V_{lostECF}} are the fluid volumes lost from the ICF and ECF, respectively.
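The five steps translate directly into code. Below is a minimal Python sketch chaining them together; every number in the example call is hypothetical (roughly a 70 kg adult with TBW 42 L, ICF 28 L, ECF 14 L), and the lost amounts of Na+ and K+ are treated as given inputs, as in the method above:

    def fluid_loss_by_compartment(tbw_b, osm_b, v_icf_b, v_ecf_b,
                                  v_lost, n_lost_na, n_lost_k):
        """Estimate ICF and ECF volume losses following steps 1-5 above."""
        n_b = osm_b * tbw_b                              # step 1: total osmoles before the loss
        n_a = n_b - n_lost_na - n_lost_k                 # step 2: total osmoles after the loss
        osm_a = n_a / (tbw_b - v_lost)                   # step 3: new osmolarity
        v_icf_a = (v_icf_b * osm_b - n_lost_k) / osm_a   # step 4: new ICF volume
        v_ecf_a = (v_ecf_b * osm_b - n_lost_na) / osm_a  #         new ECF volume
        return v_icf_b - v_icf_a, v_ecf_b - v_ecf_a      # step 5: loss per compartment

    # Hypothetical example: 2 L of fluid lost, containing 140 mOsm Na+ and 30 mOsm K+.
    lost_icf, lost_ecf = fluid_loss_by_compartment(
        tbw_b=42.0, osm_b=290.0, v_icf_b=28.0, v_ecf_b=14.0,
        v_lost=2.0, n_lost_na=140.0, n_lost_k=30.0)
    print(f"ICF loss = {lost_icf:.2f} L, ECF loss = {lost_ecf:.2f} L")

As a consistency check, the two compartment losses sum to the total lost volume of 2 L.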
|
https://en.wikipedia.org/wiki/Volume_contraction
|
In thermodynamics , the Volume Correction Factor (VCF), also known as Correction for the effect of Temperature on Liquid (CTL), is a standardized computed factor used to correct for the thermal expansion of fluids, primarily, liquid hydrocarbons at various temperatures and densities. [ 1 ] It is typically a number between 0 and 2, rounded to five decimal places which, when multiplied by the observed volume of a liquid, will return a "corrected" value standardized to a base temperature (usually 60 ° Fahrenheit or 15 ° Celsius ).
In general, VCF / CTL values have an inverse relationship with observed temperature relative to the base temperature. That is, observed temperatures above 60 °F (or whatever base temperature is used) typically correspond to a correction factor below "1", while temperatures below 60 °F correspond to a factor above "1". This behaviour follows from the kinetic theory of matter and the thermal expansion of matter : as the temperature of a substance rises, so does the average kinetic energy of its molecules. The rise in kinetic energy requires more space between the particles of a given substance, which leads to its physical expansion. [ 2 ]
Conceptually, this makes sense when applying the VCF to observed volumes. Observed temperatures below the base temperature generate a factor above "1", indicating the corrected volume must increase to account for the contraction of the substance relative to the base temperature. The opposite is true for observed temperatures above the base temperature, generating factors below "1" to account for the expansion of the substance relative to the base temperature.
While the VCF is primarily used for liquid hydrocarbons, the theory and principles behind it apply to most liquids, with some exceptions. As a general principle, most liquid substances will contract in volume as temperature drops. However, certain substances, water for example, contain unique angular structures at the molecular level. As such, when these substances reach temperatures just above their freezing point, they begin to expand, since the angle of the bonds prevents the molecules from fitting tightly together, resulting in more empty space between the molecules in the solid state. [ 3 ] Other substances which exhibit similar properties include silicon, bismuth, antimony and germanium. [ 4 ]
While these are exceptions to the general principles of thermal expansion and contraction, they would seldom, if ever, be used in conjunction with VCF / CTL, as the correction factors depend on specific constants, which in turn depend on liquid hydrocarbon classifications and densities.
The formula for Volume Correction Factor is commonly defined as:
{\displaystyle VCF=C_{TL}=\exp\{-\alpha _{T}\Delta T[1+0.8\alpha _{T}(\Delta T+\delta _{T})]\}}
Where: {\displaystyle \alpha _{T}} is the coefficient of thermal expansion of the liquid at the base temperature, {\displaystyle \Delta T} is the observed temperature minus the base temperature, and {\displaystyle \delta _{T}} is a small temperature correction term. The constants from which {\displaystyle \alpha _{T}} is computed depend on the product class, for example: most crudes, fuel oils, light gas oils, the transition zone, very light gas oils, and cylinder lube oil.
In standard applications, computing the VCF or CTL requires the observed temperature of the product, and its API gravity at 60 °F. Once calculated, the corrected volume is the product of the VCF and the observed volume.
{\displaystyle V_{Corrected}=VCF*V_{Observed}}
Since API gravity is an inverse measure of a liquid's density relative to that of water , it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute the Specific Gravity (SG) , then converting the Specific Gravity to Degrees API as follows: {\displaystyle SG={\frac {\rho _{Substance}}{\rho _{H2O_{T}}}}\longrightarrow API_{Gravity}={\frac {141.5}{SG}}-131.5}
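The two computations can be sketched in Python as follows; note that the thermal-expansion coefficient used below is an illustrative placeholder, not an API table value (in practice it is derived from the product-class constants mentioned above):

    import math

    def vcf(alpha_t, delta_t, delta_corr=0.0):
        """CTL per the formula above: exp(-a*dT*(1 + 0.8*a*(dT + delta)))."""
        return math.exp(-alpha_t * delta_t * (1.0 + 0.8 * alpha_t * (delta_t + delta_corr)))

    def api_gravity(density_kg_m3, water_density_kg_m3=999.016):
        """Degrees API from density via specific gravity at 60 degF."""
        sg = density_kg_m3 / water_density_kg_m3
        return 141.5 / sg - 131.5

    # Observed temperature 75 degF against a 60 degF base; alpha is illustrative only.
    factor = vcf(alpha_t=0.0005, delta_t=75.0 - 60.0)
    corrected = factor * 10_000.0  # corrected volume for 10,000 observed units
    print(f"VCF = {factor:.5f}, corrected volume = {corrected:.1f}")
    print(f"API gravity of an 850 kg/m^3 product: {api_gravity(850.0):.1f} degrees")

As expected, an observed temperature above the base temperature yields a factor below 1.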
Traditionally, VCF / CTL are found by matching the observed temperature and API gravity within standardized books and tables published by the American Petroleum Institute . These methods are often more time-consuming than entering the values into a VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels. [ 7 ]
Density of pure water at 60 °F = 999.016 kg/m³ (0.999016 g/cm³). [ 8 ]
Note: There is no universal agreement on the exact density of pure water at various temperatures, since each industry often uses a different standard. For example, the USGS gives 0.99907 g/cm³. [ 9 ] While the relative variance between values may be low, it is best to use the agreed-upon standard for the industry you are working in.
|
https://en.wikipedia.org/wiki/Volume_correction_factor
|
In mathematics , a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates . Thus a volume element is an expression of the form {\displaystyle \mathrm {d} V=\rho (u_{1},u_{2},u_{3})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}} where the {\displaystyle u_{i}} are the coordinates, so that the volume of any set {\displaystyle B} can be computed by {\displaystyle \operatorname {Volume} (B)=\int _{B}\rho (u_{1},u_{2},u_{3})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}.} For example, in spherical coordinates {\displaystyle \mathrm {d} V=u_{1}^{2}\sin u_{2}\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}} , and so {\displaystyle \rho =u_{1}^{2}\sin u_{2}} .
The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element , and in this setting it is useful for doing surface integrals . Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula ). This fact allows volume elements to be defined as a kind of measure on a manifold . On an orientable differentiable manifold , a volume element typically arises from a volume form : a top degree differential form . On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density .
In Euclidean space , the volume element is given by the product of the differentials of the Cartesian coordinates {\displaystyle \mathrm {d} V=\mathrm {d} x\,\mathrm {d} y\,\mathrm {d} z.} In different coordinate systems of the form {\displaystyle x=x(u_{1},u_{2},u_{3})} , {\displaystyle y=y(u_{1},u_{2},u_{3})} , {\displaystyle z=z(u_{1},u_{2},u_{3})} , the volume element changes by the Jacobian (determinant) of the coordinate change: {\displaystyle \mathrm {d} V=\left|{\frac {\partial (x,y,z)}{\partial (u_{1},u_{2},u_{3})}}\right|\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\mathrm {d} u_{3}.} For example, in spherical coordinates (mathematical convention) {\displaystyle {\begin{aligned}x&=\rho \cos \theta \sin \phi \\y&=\rho \sin \theta \sin \phi \\z&=\rho \cos \phi \end{aligned}}} the Jacobian determinant is {\displaystyle \left|{\frac {\partial (x,y,z)}{\partial (\rho ,\phi ,\theta )}}\right|=\rho ^{2}\sin \phi } so that {\displaystyle \mathrm {d} V=\rho ^{2}\sin \phi \,\mathrm {d} \rho \,\mathrm {d} \theta \,\mathrm {d} \phi .} This can be seen as a special case of the fact that differential forms transform through a pullback {\displaystyle F^{*}} as {\displaystyle F^{*}(u\;dy^{1}\wedge \cdots \wedge dy^{n})=(u\circ F)\det \left({\frac {\partial F^{j}}{\partial x^{i}}}\right)\mathrm {d} x^{1}\wedge \cdots \wedge \mathrm {d} x^{n}}
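The spherical-coordinate computation can be verified symbolically. The following minimal sketch, using Python's sympy, builds the Jacobian matrix of the map (ρ, φ, θ) ↦ (x, y, z) and confirms that its determinant is ρ² sin φ:

    import sympy as sp

    rho, phi, theta = sp.symbols('rho phi theta', positive=True)
    # Spherical coordinates, mathematical convention as in the text.
    x = rho * sp.cos(theta) * sp.sin(phi)
    y = rho * sp.sin(theta) * sp.sin(phi)
    z = rho * sp.cos(phi)

    J = sp.Matrix([x, y, z]).jacobian([rho, phi, theta])
    print(sp.simplify(J.det()))  # rho**2*sin(phi), hence dV = rho^2 sin(phi) drho dphi dtheta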
Consider the linear subspace of the n -dimensional Euclidean space R n that is spanned by a collection of linearly independent vectors {\displaystyle X_{1},\dots ,X_{k}.} To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the {\displaystyle X_{i}} is the square root of the determinant of the Gramian matrix of the {\displaystyle X_{i}} : {\displaystyle {\sqrt {\det(X_{i}\cdot X_{j})_{i,j=1\dots k}}}.}
Any point p in the subspace can be given coordinates {\displaystyle (u_{1},u_{2},\dots ,u_{k})} such that {\displaystyle p=u_{1}X_{1}+\cdots +u_{k}X_{k}.} At a point p , if we form a small parallelepiped with sides {\displaystyle \mathrm {d} u_{i}} , then the volume of that parallelepiped is the square root of the determinant of the Gramian matrix {\displaystyle {\sqrt {\det \left((du_{i}X_{i})\cdot (du_{j}X_{j})\right)_{i,j=1\dots k}}}={\sqrt {\det(X_{i}\cdot X_{j})_{i,j=1\dots k}}}\;\mathrm {d} u_{1}\,\mathrm {d} u_{2}\,\cdots \,\mathrm {d} u_{k}.} This therefore defines the volume form in the linear subspace.
On an oriented Riemannian manifold of dimension n , the volume element is a volume form equal to the Hodge dual of the unit constant function, {\displaystyle f(x)=1} : {\displaystyle \omega =\star 1.} Equivalently, the volume element is precisely the Levi-Civita tensor {\displaystyle \epsilon } . [ 1 ] In coordinates, {\displaystyle \omega =\epsilon ={\sqrt {\left|\det g\right|}}\,\mathrm {d} x^{1}\wedge \cdots \wedge \mathrm {d} x^{n}} where {\displaystyle \det g} is the determinant of the metric tensor g written in the coordinate system.
A simple example of a volume element can be explored by considering a two-dimensional surface embedded in n -dimensional Euclidean space . Such a volume element is sometimes called an area element . Consider a subset {\displaystyle U\subset \mathbb {R} ^{2}} and a mapping function {\displaystyle \varphi :U\to \mathbb {R} ^{n}} thus defining a surface embedded in {\displaystyle \mathbb {R} ^{n}} . In two dimensions, volume is just area, and a volume element gives a way to determine the area of parts of the surface. Thus a volume element is an expression of the form {\displaystyle f(u_{1},u_{2})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}} that allows one to compute the area of a set B lying on the surface by computing the integral {\displaystyle \operatorname {Area} (B)=\int _{B}f(u_{1},u_{2})\,\mathrm {d} u_{1}\,\mathrm {d} u_{2}.}
Here we will find the volume element on the surface that defines area in the usual sense. The Jacobian matrix of the mapping is {\displaystyle J_{ij}={\frac {\partial \varphi _{i}}{\partial u_{j}}}} with index i running from 1 to n , and j running from 1 to 2. The Euclidean metric in the n -dimensional space induces a metric {\displaystyle g=J^{T}J} on the set U , with matrix elements {\displaystyle g_{ij}=\sum _{k=1}^{n}J_{ki}J_{kj}=\sum _{k=1}^{n}{\frac {\partial \varphi _{k}}{\partial u_{i}}}{\frac {\partial \varphi _{k}}{\partial u_{j}}}.}
The determinant of the metric is given by {\displaystyle \det g=\left|{\frac {\partial \varphi }{\partial u_{1}}}\wedge {\frac {\partial \varphi }{\partial u_{2}}}\right|^{2}=\det(J^{T}J)}
For a regular surface, this determinant is non-vanishing; equivalently, the Jacobian matrix has rank 2.
Now consider a change of coordinates on U , given by a diffeomorphism {\displaystyle f\colon U\to U,} so that the coordinates {\displaystyle (u_{1},u_{2})} are given in terms of {\displaystyle (v_{1},v_{2})} by {\displaystyle (u_{1},u_{2})=f(v_{1},v_{2})} . The Jacobian matrix of this transformation is given by {\displaystyle F_{ij}={\frac {\partial f_{i}}{\partial v_{j}}}.}
In the new coordinates, we have {\displaystyle {\frac {\partial \varphi _{i}}{\partial v_{j}}}=\sum _{k=1}^{2}{\frac {\partial \varphi _{i}}{\partial u_{k}}}{\frac {\partial f_{k}}{\partial v_{j}}}} and so the metric transforms as {\displaystyle {\tilde {g}}=F^{T}gF} where {\displaystyle {\tilde {g}}} is the pullback metric in the v coordinate system. The determinant is {\displaystyle \det {\tilde {g}}=\det g\left(\det F\right)^{2}.}
Given the above construction, it should now be straightforward to understand how the volume element is invariant under an orientation-preserving change of coordinates.
In two dimensions, the volume is just the area. The area of a subset {\displaystyle B\subset U} is given by the integral {\displaystyle {\begin{aligned}{\mbox{Area}}(B)&=\iint _{B}{\sqrt {\det g}}\;\mathrm {d} u_{1}\;\mathrm {d} u_{2}\\[1.6ex]&=\iint _{B}{\sqrt {\det g}}\left|\det F\right|\;\mathrm {d} v_{1}\;\mathrm {d} v_{2}\\[1.6ex]&=\iint _{B}{\sqrt {\det {\tilde {g}}}}\;\mathrm {d} v_{1}\;\mathrm {d} v_{2}.\end{aligned}}}
Thus, in either coordinate system, the volume element takes the same expression: the expression of the volume element is invariant under a change of coordinates.
Note that there was nothing particular to two dimensions in the above presentation; the above trivially generalizes to arbitrary dimensions.
For example, consider the sphere with radius r centered at the origin in R 3 . This can be parametrized using spherical coordinates with the map {\displaystyle \phi (u_{1},u_{2})=(r\cos u_{1}\sin u_{2},r\sin u_{1}\sin u_{2},r\cos u_{2}).} Then {\displaystyle g={\begin{pmatrix}r^{2}\sin ^{2}u_{2}&0\\0&r^{2}\end{pmatrix}},} and the area element is {\displaystyle \omega ={\sqrt {\det g}}\;\mathrm {d} u_{1}\mathrm {d} u_{2}=r^{2}\sin u_{2}\,\mathrm {d} u_{1}\mathrm {d} u_{2}.}
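The same area element can be recovered computationally from the induced metric g = J^T J, as in the following Python/sympy sketch; since det g = r^4 sin^2(u2), its square root gives r^2 sin(u2) on 0 < u2 < π:

    import sympy as sp

    r, u1, u2 = sp.symbols('r u1 u2', positive=True)
    # Embedding of the radius-r sphere from the parametrization above.
    phi = sp.Matrix([r * sp.cos(u1) * sp.sin(u2),
                     r * sp.sin(u1) * sp.sin(u2),
                     r * sp.cos(u2)])

    J = phi.jacobian([u1, u2])   # 3x2 Jacobian of the embedding
    g = sp.simplify(J.T * J)     # induced metric: diag(r^2 sin^2 u2, r^2)
    print(g)
    print(sp.simplify(g.det()))  # r**4*sin(u2)**2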
|
https://en.wikipedia.org/wiki/Volume_element
|
The volume entropy is an asymptotic invariant of a compact Riemannian manifold that measures the exponential growth rate of the volume of metric balls in its universal cover . This concept is closely related with other notions of entropy found in dynamical systems and plays an important role in differential geometry and geometric group theory . If the manifold is nonpositively curved then its volume entropy coincides with the topological entropy of the geodesic flow . It is of considerable interest in differential geometry to find the Riemannian metric on a given smooth manifold which minimizes the volume entropy, with locally symmetric spaces forming a basic class of examples.
Let ( M , g ) be a compact Riemannian manifold, with universal cover {\displaystyle {\tilde {M}}} . Choose a point {\displaystyle {\tilde {x}}_{0}\in {\tilde {M}}} .
The volume entropy (or asymptotic volume growth) {\displaystyle h=h(M,g)} is defined as the limit
{\displaystyle h(M,g)=\lim _{R\to +\infty }{\frac {\log \left(\operatorname {vol} B(R)\right)}{R}}}
where B ( R ) is the ball of radius R in {\displaystyle {\tilde {M}}} centered at {\displaystyle {\tilde {x}}_{0}} and vol is the Riemannian volume in the universal cover with the natural Riemannian metric.
A. Manning proved that the limit exists and does not depend on the choice of the base point. This asymptotic invariant describes the exponential growth rate of the volume of balls in the universal cover as a function of the radius.
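As a small numeric illustration (a sketch, not drawn from the article's sources): for a closed hyperbolic surface the universal cover is the hyperbolic plane, where the ball volume (area) is vol B(R) = 2π(cosh R - 1), so log(vol B(R))/R tends to the volume entropy h = 1:

    import math

    # Ball volume (area) in the hyperbolic plane H^2: 2*pi*(cosh(R) - 1).
    for R in (5.0, 10.0, 20.0, 40.0, 80.0):
        vol = 2.0 * math.pi * (math.cosh(R) - 1.0)
        print(f"R = {R:5.1f}  log(vol)/R = {math.log(vol) / R:.6f}")

The printed ratios approach 1 as R grows, matching the exponential growth rate of hyperbolic volume.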
Katok's entropy inequality was recently exploited to obtain a tight asymptotic bound for the systolic ratio of surfaces of large genus; see systoles of surfaces .
|
https://en.wikipedia.org/wiki/Volume_entropy
|