Veterinary virology is the study of viruses in non-human animals. It is an important branch of veterinary medicine.
Rhabdoviruses are a diverse family of single-stranded, negative-sense RNA viruses that infect a wide range of hosts, from plants and insects to fish and mammals. The Rhabdoviridae family consists of six genera, two of which, cytorhabdoviruses and nucleorhabdoviruses, infect only plants. Novirhabdoviruses infect fish, and vesiculoviruses, lyssaviruses and ephemeroviruses infect mammals, fish and invertebrates. The family includes pathogens such as rabies virus, vesicular stomatitis virus and potato yellow dwarf virus that are of public health, veterinary, and agricultural significance. [ 1 ]
Foot-and-mouth disease virus (FMDV) is a member of the Aphthovirus genus in the Picornaviridae family and is the cause of foot-and-mouth disease in pigs, cattle, sheep and goats. It is a non-enveloped, positive-strand RNA virus. FMDV is highly contagious and enters the body through inhalation. [ 2 ]
Pestiviruses have single-stranded, positive-sense RNA genomes. They cause classical swine fever (CSF) and bovine viral diarrhea (BVD). Mucosal disease is a distinct, chronic persistent infection, whereas BVD is an acute infection. [ 3 ]
Arteriviruses are small, enveloped, animal viruses with an icosahedral core containing a positive-sense RNA genome. The family includes equine arteritis virus (EAV), porcine reproductive and respiratory syndrome virus (PRRSV), lactate dehydrogenase elevating virus (LDV) of mice and simian haemorrhagic fever virus (SHFV). [ 4 ]
Coronaviruses are enveloped viruses with a positive-sense RNA genome and a nucleocapsid of helical symmetry. They infect the upper respiratory and gastrointestinal tracts of mammals and birds, and cause a wide range of diseases in cats, dogs, pigs, rodents, cattle and humans. Transmission is by the faecal–oral route. [ 5 ]
Torovirus is a genus of viruses within the family Coronaviridae , subfamily Torovirinae , that primarily infect vertebrates; it includes Berne virus of horses and Breda virus of cattle. Toroviruses cause gastroenteritis in mammals, including, rarely, humans. [ 6 ]
Influenza is caused by RNA viruses of the family Orthomyxoviridae and affects birds and mammals.
Wild aquatic birds are the natural hosts for a large variety of influenza A viruses. Occasionally viruses are transmitted from this reservoir to other species and may then cause devastating outbreaks in domestic poultry or give rise to human influenza pandemics . [ 7 ]
Bluetongue virus (BTV), a member of the Orbivirus genus within the Reoviridae family, causes serious disease in livestock (sheep, goats, cattle). It is a non-enveloped virus with a segmented, double-stranded RNA genome. [ 8 ] [ 9 ]
Circoviruses are small single-stranded DNA viruses. There are two genera: gyrovirus, with one species called chicken anemia virus; and circovirus, which includes porcine circovirus types 1 and 2, psittacine beak and feather disease virus, pigeon circovirus, canary circovirus and goose circovirus. [ 10 ]
Herpesviruses are ubiquitous pathogens infecting a variety of animals, including humans. Hosts include many economically important species such as abalone, oysters, salmon, poultry ( avian infectious laryngotracheitis , Marek's disease ), cattle ( bovine malignant catarrhal fever ), dogs, goats, horses, cats ( feline viral rhinotracheitis ), and pigs ( pseudorabies ). [ 11 ] Infections may be severe and may result in fatalities or reduced productivity. Therefore, outbreaks of herpesviruses in livestock cause significant financial losses and are an important area of study in veterinary virology.
African swine fever virus (ASFV) is a large double-stranded DNA virus which replicates in the cytoplasm of infected cells and is the only member of the Asfarviridae family. The virus causes a lethal haemorrhagic disease in domestic pigs; some strains can cause death of animals within as little as a week after infection. In other species, the virus causes no obvious disease. ASFV is endemic to sub-Saharan Africa and exists in the wild through a cycle of infection between ticks and wild pigs, bushpigs and warthogs. [ 12 ]
Retroviruses are established pathogens of veterinary importance. They are generally a cause of cancer or immune deficiency. [ 13 ]
Flaviviruses constitute a family of linear, single-stranded RNA(+) viruses. Flaviviruses include West Nile virus , dengue virus , tick-borne encephalitis virus, yellow fever virus, and several other viruses. Many flavivirus species can replicate in both mammalian and insect cells. Most flaviviruses are arthropod-borne and multiply in both vertebrates and arthropods. The viruses in this family that are of veterinary importance include Japanese encephalitis virus , St. Louis encephalitis virus , West Nile virus , Israel turkey meningoencephalomyelitis virus , Sitiawan virus , Wesselsbron virus , yellow fever virus and the tick-borne flaviviruses, e.g. louping ill virus . [ 14 ]
Paramyxoviruses are a diverse family of non-segmented negative-strand RNA viruses that include many highly pathogenic viruses affecting humans, animals, and birds. These include canine distemper virus (dogs), phocine distemper virus (seals), cetacean morbillivirus (dolphins and porpoises), Newcastle disease virus (birds) and rinderpest virus (cattle). Some paramyxoviruses, such as the henipaviruses , are zoonotic pathogens, occurring primarily in animal hosts but also able to infect humans. [ 15 ]
Parvoviruses are linear, non-segmented single-stranded DNA viruses , with an average genome size of 5000 nucleotides. They are classified as group II viruses in Baltimore classification of viruses. Parvoviruses are among the smallest viruses (hence the name, from Latin parvus meaning small ) and are 18–28 nm in diameter. [ 16 ]
Parvoviruses can cause disease in some animals, including starfish and humans. Because the viruses require actively dividing cells to replicate, the type of tissue infected varies with the age of the animal. The gastrointestinal tract and lymphatic system can be affected at any age, leading to vomiting, diarrhea and immunosuppression, but cerebellar hypoplasia is seen only in cats that were infected in the womb or at less than two weeks of age, and disease of the myocardium is seen in puppies infected between the ages of three and eight weeks. [ 17 ]
|
https://en.wikipedia.org/wiki/Veterinary_virology
|
VETIGEL is a veterinary product, a plant-derived injectable gel that is claimed to quickly stop traumatic bleeding on external and internal wounds. Its name is coined from Medi-Gel, from the video game series Mass Effect . [ citation needed ] It uses a plant-based haemophilic polymer made from polysaccharides that forms a mesh which seals the wound . [ 1 ] It is manufactured by Cresilon, Inc., an American biotechnology company, which is also exploring human products derived from its technology, slated to launch as early as 2016. [ 2 ] The company plans on releasing a product for the military and the emergency medicine market first, followed by a product for the human surgical market when FDA approval is granted. [ 3 ]
Cresilon, Inc. (formerly Suneris, Inc.) is headquartered in Brooklyn , New York City , United States. The company was founded in 2010 by Joe Landolina and Isaac Miller, while they were students at NYU Poly . [ 4 ] [ 5 ] Cresilon focuses on wound care products, specifically those in the field of hemostasis . The company operates out of a 25,000 sq. ft. manufacturing facility located in Sunset Park, Brooklyn, NY . [ 6 ] [ 7 ]
|
https://en.wikipedia.org/wiki/Vetigel
|
Veveo is a software company based in Andover, Massachusetts .
Established in 2004, Veveo has collaborated with various firms including Comcast , Cablevision , Rogers , AT&T , DirecTV , and Nokia . The venture-backed company is headquartered in Andover, Massachusetts , and holds an intellectual property portfolio of 50 issued or allowed patents along with 80 patent applications. [ 1 ]
On February 25, 2014, Veveo was acquired by Rovi Corporation . [ 2 ] [ 3 ]
Veveo's developments in natural language processing and understanding enabled dialog-based, real-time conversational interfaces, introduced in late 2012. [ 4 ] These interfaces allowed conversational intelligence to be applied to voice interfaces on connected devices and applications, so that users could talk to devices in normal language and have the devices respond with natural language responses. [ 5 ]
|
https://en.wikipedia.org/wiki/Veveo
|
Vflo is a commercially available, physics -based distributed hydrologic model developed by Vieux & Associates, Inc. Vflo uses radar rainfall data as hydrologic input to simulate distributed runoff . [ 1 ] [ 2 ] Vflo employs GIS maps for parameterization via a desktop interface. [ 3 ] The model is suited for distributed hydrologic forecasting in post-analysis and in continuous operations. Vflo output takes the form of hydrographs at selected drainage network grids, as well as distributed runoff maps covering the watershed. Model applications include civil infrastructure operations and maintenance, stormwater prediction and emergency management , continuous and short-term surface water runoff , recharge estimation, soil moisture monitoring, land use planning, water quality monitoring, and water resources management.
Vflo considers the spatial character of the parameters and precipitation controlling hydrologic processes, and thus improves upon the lumped representations previously used in hydrologic modeling . Historical practice was to use lumped representations because of computational limitations or because sufficient data were not available to populate a distributed model database. [ 4 ] Advances in computational speed, the development of high-resolution precipitation data from radar and satellites, and the availability of worldwide digital data sets and GIS technology make distributed, physics-based modeling possible. [ 5 ] Vflo is designed to take advantage of the spatial variability of high-resolution radar rainfall input, GIS datasets, and hydraulic channel characteristics. Because it is physics -based, it produces hydrographs based on conservation equations and the hydraulics of the drainage network , and can be employed in locations where there are no rain gauges or previous modeling studies. In addition, Vflo 's network approach makes models scalable from upland watersheds to river basins using the same drainage network .
Vflo is suited for distributed hydrologic forecasting in post-analysis and continuous operations. Vflo models may be calibrated by loading precipitation maps for historical events and comparing simulated volume/peak hydrographs to observed hydrographs. Elevation data are taken from a digital elevation model , and a vector channel representation is employed. Parameterization utilizes digital data sets at any resolution, including LIDAR terrain data and other digital maps of impervious area , soils, and land use/cover. Vflo is developed to utilize multi-sensor inputs from radar , satellites, rain gauges , or model forecasts. The kinematic wave analogy is used to represent hydraulic conditions in a watershed .
|
https://en.wikipedia.org/wiki/Vflo
|
The Vg1 ribozyme is a manganese-dependent RNA enzyme, or ribozyme , and is the smallest ribozyme to be identified. It was identified in the 3′ UTR of Xenopus Vg1 mRNA transcripts and in mouse beta-actin mRNA. [ 1 ] The ribozyme was identified from in vitro studies showing that the Vg1 mRNA was cleaved within the 3′ UTR in the absence of protein. From studies of the Vg1 mRNA 3′ UTR, a manganese -dependent ribozyme was predicted to exist. The ribozyme was shown to be located adjacent to the polyadenylation site, and in vitro studies showed that it catalyzes a first-order reaction whose mechanism of cleavage is similar to that of the manganese ribozyme present in Tetrahymena group I introns . [ 2 ]
In vivo studies showed that this ribozyme is not functional within the cell. [ 1 ]
|
https://en.wikipedia.org/wiki/Vg1_ribozyme
|
ViaGen Pets , based in Cedar Park, Texas, is a division of TransOva Genetics that offers animal cloning services to pet owners. The ViaGen Pets division was launched in 2016. [ 1 ]
ViaGen Pets offers cloning as well as DNA preservation services, sometimes called tissue or cell banking. [ 2 ]
ViaGen's subsidiary, Start Licensing, owns a cloning patent which, as of 2018, is licensed to their only competitor, which also offers animal cloning services. [ 3 ]
The cloning process used by both ViaGen and their competitor is somatic cell nuclear transfer, the same technique that was used to clone Dolly the sheep . [ 3 ]
ViaGen began by offering cloning to the livestock and equine industries in 2003, [ 1 ] and later added cloning of cats and dogs in 2016. [ 4 ]
|
https://en.wikipedia.org/wiki/ViaGen_Pets
|
Viability PCR , also named v-PCR or vPCR, is an extension of PCR . Through a simple pre-treatment of the sample with specific photo-reactive intercalating reagents, the DNA of dead cells is neutralized, so that only DNA from live cells is detected by PCR. This approach greatly expands the analytical scope of PCR procedures. The ability to detect only living cells is important because, in key applications, the number of live cells matters more than the total cell count. Examples include food and water quality control, infectious disease diagnostics, veterinary applications, and ecological dynamics.
The first published work on this analytical approach appeared in 2003, when Norwegian researchers [ 1 ] suggested the use of ethidium monoazide, an azide derivative of ethidium bromide already used in other analytical fields such as flow cytometry , as a candidate reagent for viability PCR. However, the most important advances were made by Nocker and colleagues, who demonstrated in successive works [ 2 ] [ 3 ] [ 4 ] [ 5 ] the potential of this technology and also proposed propidium monoazide as a better reagent for vPCR. [ 6 ]
The field is still developing. From 2003 to 2015, scientific evidence for the applicability of vPCR accumulated, and current efforts focus on procedure optimization. Because simply mixing the reagent with the sample, photo-activating it, and running PCR does not always give the expected results, each procedure needs some optimization. The main improvements to date have been:
- Improving the efficiency of photo-activation: early procedures relied on high-power halogen lamps, which overheated the samples and did not ensure a constant light dose; these home-made setups have been replaced by LED-based instruments. [1] [2]
- The use of long PCR amplicons as targets. [ 7 ]
- The increase of temperature during dark incubation. [ 8 ]
By combining different optimization strategies [ 9 ] and controlling analytical bias, vPCR has become a powerful analytical tool.
|
https://en.wikipedia.org/wiki/Viability_PCR
|
A viability assay is an assay designed to determine the ability of organs , cells or tissues to maintain or recover a state of survival. [ 2 ] Viability can be distinguished from the all-or-nothing states of life and death by the use of a quantifiable index that ranges between 0 and 1 or, equivalently, between 0% and 100%. [ 3 ] Viability can be observed through the physical properties of cells, tissues, and organs. Some of these include mechanical activity; motility, such as with spermatozoa and granulocytes ; the contraction of muscle tissue or cells; mitotic activity in cellular functions; and more. [ 3 ] Viability assays provide a more precise basis for measurement of an organism's level of vitality.
Viability assays can lead to more findings than the difference of living versus nonliving. These techniques can be used to assess the success of cell culture techniques, cryopreservation techniques, the toxicity of substances, or the effectiveness of substances in mitigating effects of toxic substances. [ 4 ]
Though simple visual techniques of observing viability can be useful, it is difficult to thoroughly measure the viability of an organism, or part of one, merely by observing physical properties. However, a variety of common assay protocols are used for further observation of viability.
As with many kinds of viability assays, quantitative measures of physiological function do not indicate whether damage repair and recovery is possible. [ 12 ] An assay of the ability of a cell line to adhere and divide may be more indicative of incipient damage than membrane integrity. [ 13 ]
"Frogging" is a type of viability assay method that utilizes an agar plate for its environment and consists of plating serial dilutions by pinning them after they have been diluted in liquid. Some of its limitations include that it does not account for total viability and it is not particularly sensitive to low-viability assays; however, it is known for its quick pace. [ 1 ] "Tadpoling", which is a method practiced after the development of "frogging", is similar to the "frogging" method, but its test cells are diluted in liquid and then kept in liquid through the examination process. The "tadpoling" method can be used to measure culture viability accurately, which is what depicts its main separation from "frogging". [ 1 ]
|
https://en.wikipedia.org/wiki/Viability_assay
|
Viability theory is an area of mathematics that studies the evolution of dynamical systems under constraints on the system state . [ 1 ] [ 2 ] It was developed to formalize problems arising in the study of various natural and social phenomena, and has close ties to the theories of optimal control and set-valued analysis .
Many systems, organizations, and networks arising in biology and the social sciences do not evolve in a deterministic way, nor even in a stochastic way. Rather, they evolve with a Darwinian flavor, driven by random fluctuations yet constrained to remain "viable" by their environment. Viability theory started in 1976 by translating mathematically the title of the book Chance and Necessity [ 3 ] by Jacques Monod: the differential inclusion {\displaystyle x'(t)\in F(x(t))} for chance, and {\displaystyle x(t)\in K} for necessity. The differential inclusion is a type of "evolutionary engine" (called an evolutionary system) associating with any initial state x a subset of evolutions starting at x. The system is said to be deterministic if this set contains one and only one evolution, and contingent otherwise.
Necessity is the requirement that at each instant the evolution is viable (i.e., remains) in the environment K described by viability constraints , a word encompassing polysemous concepts such as stability, confinement, homeostasis, adaptation , etc., expressing the idea that some variables must obey some constraints (representing physical, social, biological and economic constraints, etc.) that can never be violated. So viability theory starts as the confrontation of evolutionary systems governing evolutions with the viability constraints that such evolutions must obey. They share common features:
Viability theory thus designs and develops mathematical and algorithmic methods for investigating the "adaptation to viability constraints" of evolutions governed by complex systems under uncertainty, found in many domains involving living beings, from biological evolution to economics, from environmental sciences to financial markets, and from control theory and robotics to cognitive sciences. This required forging a differential calculus of set-valued maps (set-valued analysis), differential inclusions, and a differential calculus in metric spaces (mutational analysis).
The basic problem of viability theory is to find the "viability kernel" of an environment: the subset of initial states in the environment from which there exists at least one evolution "viable" in the environment, in the sense that at each time the state of the evolution remains confined to the environment. The second question is then to provide the regulation map selecting such viable evolutions starting from the viability kernel. The viability kernel may be equal to the whole environment, in which case the environment is called viable under the evolutionary system, or it may be the empty set, in which case the environment is called a repellor, because all evolutions eventually violate the constraints.
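The kernel's fixed-point characterization suggests a simple grid approximation: start from the whole environment and repeatedly discard states from which no admissible control keeps the next state inside the current set, until nothing changes. The sketch below is purely illustrative; the one-dimensional dynamics, control bounds, grid, and time step are all invented for the example (analytically the kernel here is about [0.53, 0.87], and the coarse grid recovers it only approximately).

```python
import numpy as np

# Toy viability-kernel computation on a grid. Environment K = [0, 1];
# controlled dynamics x' = 3(x - 0.7) + u with |u| <= 0.5 (all assumptions).
xs = np.linspace(0.0, 1.0, 201)
dt, u_max = 0.05, 0.5
drift = lambda x: 3.0 * (x - 0.7)

viable = np.ones_like(xs, dtype=bool)        # start from all of K
for _ in range(1000):                        # iterate to a fixed point
    nxt = np.zeros_like(viable)
    for i, x in enumerate(xs):
        if not viable[i]:
            continue
        # keep x iff some control lands the Euler step in the current set
        for u in (-u_max, 0.0, u_max):
            y = x + (drift(x) + u) * dt
            j = int(round(y * 200))
            if 0 <= j <= 200 and viable[j]:
                nxt[i] = True
                break
    if np.array_equal(nxt, viable):
        break
    viable = nxt

print("approximate viability kernel:", xs[viable].min(), "to", xs[viable].max())
```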
The viability kernel assumes that some kind of "decision maker" controls or regulates evolutions of the system. If not, the next problem looks at the "tychastic kernel" (from tyche, meaning chance in Greek) or "invariance kernel", the subset of initial states in the environment such that all evolutions are "viable" in the environment, an alternative way to stochastic differential equations encapsulating the concept of "insurance" against uncertainty, providing a way of eradicating it instead of evaluating it.
|
https://en.wikipedia.org/wiki/Viability_theory
|
Viable but nonculturable ( VBNC ) bacteria refers to bacteria that are in a state of very low metabolic activity and do not divide, but are alive and retain the ability to become culturable once resuscitated. [ 1 ]
Bacteria in a VBNC state cannot grow on standard growth media , though flow cytometry can measure the viability of the bacteria. [ 1 ] Bacteria can enter the VBNC state as a response to stress , due to adverse nutrient , temperature , osmotic , oxygen , and light conditions. [ 1 ] The cells that are in the VBNC state are morphologically smaller, and demonstrate reduced nutrient transport, rate of respiration, and synthesis of macromolecules. [ 1 ] Sometimes, VBNC bacteria can remain in that state for over a year. [ 1 ] It has been shown that numerous pathogens and non-pathogens can enter the VBNC state, which therefore has significant implications in pathogenesis , bioremediation , and other branches of microbiology . [ 1 ]
The existence of the VBNC state is controversial. The validity and interpretation of the assays to determine the VBNC state have been questioned. [ 2 ]
Species known to enter a VBNC state: [ 3 ]
|
https://en.wikipedia.org/wiki/Viable_but_nonculturable
|
Viaspan was the trademark under which the University of Wisconsin cold storage solution (also known as University of Wisconsin solution or UW solution ) was sold. Currently, UW solution is sold under the Belzer UW trademark and others like Bel-Gen or StoreProtect. UW solution was the first solution designed for use in organ transplantation , and became the first intracellular -like preservation medium. Developed in the late 1980s by Folkert Belzer and James Southard for pancreas preservation, the solution soon displaced EuroCollins solution as the preferred medium for cold storage of livers and kidneys , as well as pancreas. The solution has also been used for hearts and other organs . University of Wisconsin cold storage solution remains what is often called the gold standard for organ preservation, [ 1 ] despite the development of other solutions that are in some respects superior. [ 2 ]
The guiding principles for the development of UW Solution were: [ citation needed ]
|
https://en.wikipedia.org/wiki/Viaspan
|
A vibrating-sample magnetometer (VSM) (also referred to as a Foner magnetometer or oscillation magnetometer) is a scientific instrument that measures magnetic properties based on Faraday's law of induction. Simon Foner at MIT Lincoln Laboratory invented the VSM in 1955 and reported it in 1959. [ 1 ] It was also mentioned by G. W. Van Oosterhout [ 2 ] and by P. J. Flanders in 1956. [ 3 ] A sample is first placed in a constant magnetic field, and if the sample is magnetic it aligns its magnetization with the external field. The magnetic dipole moment of the sample creates a magnetic field that changes as a function of time as the sample is moved up and down, typically by means of a piezoelectric material. The alternating magnetic field induces a voltage in the pickup coils of the VSM. [ 4 ] This induced signal is proportional to the magnetization of the sample: the greater the induced signal, the greater the magnetization. As a result, typically a hysteresis curve is recorded, [ 5 ] from which the magnetic properties of the sample can be deduced.
The idea of vibrating the sample came from D. O. Smith's [ 6 ] vibrating-coil magnetometer .
The pickup coils allow the VSM to maximize the induced signal, reduce noise, and give a wide saddle point, while minimizing the volume between the sample and the electromagnet to achieve a more uniform magnetic field at the sample space. [ 5 ] The configuration of the coils can vary depending on the type of material being studied. [ 5 ]
The VSM relies on Faraday's law of induction , with the detection of the emf given by {\displaystyle \varepsilon =N{\frac {d}{dt}}(BA\cos \vartheta )} , [ 7 ] where N is the number of wire turns, A is the area, and {\displaystyle \vartheta } the angle between the normal of the coil and the B field. However, N and A are often unnecessary if the VSM is properly calibrated. [ 7 ] By varying the strength of the electromagnet through computer software, the external field is swept from high to low and back to high. [ 7 ] Typically this is automated through a computer process and a cycle of data is printed out. The electromagnet is typically attached to a rotating base [ 7 ] to allow measurements to be taken as a function of angle. The external field is applied parallel to the sample length [ 7 ] and the aforementioned cycle traces out a hysteresis loop . Then, using the known magnetization of the calibration material and the wire volume, the high-field voltage signal can be converted into emu units, which are useful for analysis. [ 7 ]
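As an illustration of the measurement chain just described, the sketch below simulates a pickup-coil signal proportional to the sample's moment, extracts its amplitude with a software lock-in at the vibration frequency, and converts it to emu by ratio against a calibration standard. All numbers and the proportionality constant are hypothetical; a real instrument's sensitivity depends on its coil geometry.

```python
import numpy as np

f = 80.0                     # vibration frequency (Hz), assumed
fs = 50_000.0                # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)

def pickup_voltage(moment_emu, k=1.0e-3, z0=1.0e-3):
    """Synthetic pickup-coil signal for a sample of given moment (emu)."""
    return k * moment_emu * z0 * 2 * np.pi * f * np.cos(2 * np.pi * f * t)

def lock_in_amplitude(v):
    """Amplitude of v at the vibration frequency (software lock-in)."""
    x = 2 * np.mean(v * np.cos(2 * np.pi * f * t))
    y = 2 * np.mean(v * np.sin(2 * np.pi * f * t))
    return np.hypot(x, y)

# Calibrate against a standard of known moment, then convert an unknown
# sample's signal to emu using the same proportionality.
cal_moment = 6.92                                         # emu, assumed standard
cal_signal = lock_in_amplitude(pickup_voltage(cal_moment))
unknown_signal = lock_in_amplitude(pickup_voltage(2.5))   # "measured" signal
print(unknown_signal / cal_signal * cal_moment)           # ~ 2.5 emu
```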
The precision and accuracy of VSMs are quite high even among magnetometers, with sensitivity on the order of {\displaystyle 10^{-6}} emu. [ 5 ] VSMs also allow a sample to be tested at varying angles with respect to its magnetization, letting researchers minimize the effects of external influences. [ 8 ] However, VSMs are not well suited for determining the magnetization loop, due to the demagnetizing effects incurred by the sample. [ 8 ] VSMs also suffer from temperature dependence and cannot be used on fragile samples that cannot undergo acceleration (from the vibration). [ 5 ] [ 7 ] [ 8 ]
|
https://en.wikipedia.org/wiki/Vibrating-sample_magnetometer
|
A vibrating feeder or vibratory feeder is an industrial machine used to feed material to a process or machine . Vibratory feeders use both vibration and gravity to move material. They are mainly used to transport a large number of smaller objects.
Gravity determines the direction of movement; either downwards, or down and towards one side. Vibration is used to move the material. [ 1 ] A beltweigher is typically used to measure the material flow rate. A weigh feeder can measure and regulate the flow rate by varying the belt conveyor speed. Vibratory bowl feeders [ clarification needed ] are also used for automatic feeding of small to large and differently shaped industrial parts.
Vibratory feeders are used for automating high-speed production lines and assembly systems in various industrial sectors, including:
|
https://en.wikipedia.org/wiki/Vibrating_feeder
|
A vibrating wire sensor measures the opening of a joint from the change in tension of a stretched wire that is made to vibrate at an acoustic frequency. Since the wire is made of an elastic metal, this type of sensor can be used to measure pulling forces within a certain range.
The applied external force changes the tension on the wire, which changes the frequency . The frequency is measured, and indicates the amount of force on the sensor . The load sensor has an integrated electronic system to activate the vibrating wire and measure the frequency. This can be compared to a guitar: plucking a string creates vibration and sound. The pitch of the sound is dependent on the tension on the string.
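For a taut wire vibrating in its fundamental mode, the standard string relation f = (1/(2L))·√(T/μ) links frequency to tension, so the measured frequency converts directly to tension as T = 4μL²f². A minimal sketch with made-up wire parameters (real gauges ship with their own calibration factors):

```python
# Convert a measured resonant frequency to wire tension via T = 4*mu*L^2*f^2.
L = 0.150        # free wire length (m), assumed
mu = 2.0e-4      # linear density of the wire (kg/m), assumed

def tension_from_frequency(f_hz):
    """Tension (N) implied by the measured fundamental frequency (Hz)."""
    return 4.0 * mu * L**2 * f_hz**2

for f in (800.0, 900.0, 1000.0):
    print(f, "Hz ->", round(tension_from_frequency(f), 2), "N")
```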
|
https://en.wikipedia.org/wiki/Vibrating_wire
|
Vibration (from Latin vibrāre ' to shake ' ) is a mechanical phenomenon whereby oscillations occur about an equilibrium point . Vibration may be deterministic if the oscillations can be characterised precisely (e.g. the periodic motion of a pendulum ), or random if the oscillations can only be analysed statistically (e.g. the movement of a tire on a gravel road).
Vibration can be desirable: for example, the motion of a tuning fork , the reed in a woodwind instrument or harmonica , a mobile phone , or the cone of a loudspeaker .
In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound . For example, the vibrational motions of engines , electric motors , or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts, uneven friction , or the meshing of gear teeth. Careful designs usually minimize unwanted vibrations.
The studies of sound and vibration are closely related (both fall under acoustics ). Sound, or pressure waves , are generated by vibrating structures (e.g. vocal cords ); these pressure waves can also induce the vibration of structures (e.g. ear drum ). Hence, attempts to reduce noise are often related to issues of vibration. [ 1 ]
Machining vibrations are common in the process of subtractive manufacturing .
Free vibration or natural vibration occurs when a mechanical system is set in motion with an initial input and allowed to vibrate freely. Examples of this type of vibration are pulling a child back on a swing and letting it go, or hitting a tuning fork and letting it ring. The mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness.
Forced vibration is when a time-varying disturbance (load, displacement, velocity, or acceleration) is applied to a mechanical system. The disturbance can be a periodic and steady-state input, a transient input, or a random input. The periodic input can be a harmonic or a non-harmonic disturbance. Examples of these types of vibration include a washing machine shaking due to an imbalance, transportation vibration caused by an engine or uneven road, or the vibration of a building during an earthquake. For linear systems, the frequency of the steady-state vibration response resulting from the application of a periodic, harmonic input is equal to the frequency of the applied force or motion, with the response magnitude being dependent on the actual mechanical system.
Damped vibration: When the energy of a vibrating system is gradually dissipated by friction and other resistances, the vibrations are said to be damped. The vibrations gradually reduce or change in frequency or intensity, or cease, and the system rests in its equilibrium position. An example of this type of vibration is the vehicular suspension damped by the shock absorber .
Vibration testing is accomplished by introducing a forcing function into a structure, usually with some type of shaker, or by attaching the device under test (DUT) to the "table" of a shaker. Vibration testing is performed to examine the response of the DUT to a defined vibration environment. The measured response may be the ability to function in the vibration environment, fatigue life, resonant frequencies, or squeak and rattle sound output ( NVH ). Squeak and rattle testing is performed with a special type of quiet shaker that produces very low sound levels while under operation.
For relatively low frequency forcing (typically less than 100 Hz), servohydraulic (electrohydraulic) shakers are used. For higher frequencies (typically 5 Hz to 2000 Hz), electrodynamic shakers are used. Generally, one or more "input" or "control" points located on the DUT-side of a vibration fixture is kept at a specified acceleration. [ 1 ] Other "response" points may experience higher vibration levels (resonance) or lower vibration level (anti-resonance or damping) than the control point(s). It is often desirable to achieve anti-resonance to keep a system from becoming too noisy, or to reduce strain on certain parts due to vibration modes caused by specific vibration frequencies. [ 3 ]
The most common types of vibration testing services conducted by vibration test labs are sinusoidal and random. Sine (one-frequency-at-a-time) tests are performed to survey the structural response of the device under test (DUT). During the early history of vibration testing, vibration machine controllers were limited only to controlling sine motion so only sine testing was performed. Later, more sophisticated analog and then digital controllers were able to provide random control (all frequencies at once). A random (all frequencies at once) test is generally considered to more closely replicate a real world environment, such as road inputs to a moving automobile.
Most vibration testing is conducted in a 'single DUT axis' at a time, even though most real-world vibration occurs in various axes simultaneously. MIL-STD-810G, released in late 2008, Test Method 527, calls for multiple exciter testing. The vibration test fixture [ 4 ] used to attach the DUT to the shaker table must be designed for the frequency range of the vibration test spectrum. It is difficult to design a vibration test fixture which duplicates the dynamic response (mechanical impedance) [ 5 ] of the actual in-use mounting. For this reason, to ensure repeatability between vibration tests, vibration fixtures are designed to be resonance free [ 5 ] within the test frequency range.
Generally, for smaller fixtures and lower frequency ranges, the designer can target a fixture design that is free of resonances in the test frequency range. This becomes more difficult as the DUT gets larger and as the test frequency increases. In these cases multi-point control strategies [ 6 ] can mitigate some of the resonances that may be present.
Some vibration test methods limit the amount of crosstalk (movement of a response point in a mutually perpendicular direction to the axis under test) permitted to be exhibited by the vibration test fixture.
Devices specifically designed to trace or record vibrations are called vibroscopes .
Vibration analysis (VA), applied in an industrial or maintenance environment aims to reduce maintenance costs and equipment downtime by detecting equipment faults. [ 7 ] [ 8 ] VA is a key component of a condition monitoring (CM) program, and is often referred to as predictive maintenance (PdM). [ 9 ] Most commonly VA is used to detect faults in rotating equipment (Fans, Motors, Pumps, and Gearboxes etc.) such as imbalance, misalignment, rolling element bearing faults and resonance conditions. [ 10 ]
VA can use the units of Displacement, Velocity and Acceleration displayed as a time waveform (TWF), but most commonly the spectrum is used, derived from a fast Fourier transform of the TWF. The vibration spectrum provides important frequency information that can pinpoint the faulty component.
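A minimal sketch of that workflow: synthesize a time waveform containing a 25 Hz tone (standing in for imbalance at a hypothetical 1× running speed), take the FFT, and read off the dominant spectral peak. The signal and fault frequency are invented for illustration.

```python
import numpy as np

fs = 2048                                   # sample rate (Hz)
t = np.arange(0.0, 4.0, 1.0 / fs)
twf = 0.8 * np.sin(2 * np.pi * 25.0 * t) + 0.3 * np.random.randn(t.size)

# One-sided amplitude spectrum of the time waveform (TWF)
spectrum = np.abs(np.fft.rfft(twf)) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

print("dominant component:", freqs[np.argmax(spectrum)], "Hz")  # ~25 Hz -> 1x peak
```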
The fundamentals of vibration analysis can be understood by studying the simple Mass-spring-damper model. Indeed, even a complex structure such as an automobile body can be modeled as a "summation" of simple mass–spring–damper models. The mass–spring–damper model is an example of a simple harmonic oscillator . The mathematics used to describe its behavior is identical to other simple harmonic oscillators such as the RLC circuit .
Note: This article does not include the step-by-step mathematical derivations, but focuses on major vibration analysis equations and concepts. Please refer to the references at the end of the article for detailed derivations.
To start the investigation of the mass–spring–damper assume the damping is negligible and that there is no external force applied to the mass (i.e. free vibration). The force applied to the mass by the spring is proportional to the amount the spring is stretched "x" (assuming the spring is already compressed due to the weight of the mass). The proportionality constant, k, is the stiffness of the spring and has units of force/distance (e.g. lbf/in or N/m). The negative sign indicates that the force is always opposing the motion of the mass attached to it: {\displaystyle F_{\text{s}}=-kx.}
The force generated by the mass is proportional to the acceleration of the mass as given by Newton's second law of motion : {\displaystyle \Sigma F=ma=m{\ddot {x}}.}
The sum of the forces on the mass then generates this ordinary differential equation : {\displaystyle m{\ddot {x}}+kx=0.}
Assuming that the initiation of vibration begins by stretching the spring by the distance of A and releasing, the solution to the above equation that describes the motion of the mass is: {\displaystyle x(t)=A\cos(2\pi f_{\text{n}}t).}
This solution says that it will oscillate with simple harmonic motion that has an amplitude of A and a frequency of f n . The number f n is called the undamped natural frequency . For the simple mass–spring system, f n is defined as: {\displaystyle f_{\text{n}}={\frac {1}{2\pi }}{\sqrt {\frac {k}{m}}}.}
Note: angular frequency ω (ω=2 π f ) with the units of radians per second is often used in equations because it simplifies the equations, but is normally converted to ordinary frequency (units of Hz or equivalently cycles per second) when stating the frequency of a system. If the mass and stiffness of the system are known, the formula above can determine the frequency at which the system vibrates once set in motion by an initial disturbance. Every vibrating system has one or more natural frequencies that it vibrates at once disturbed. This simple relation can be used to understand in general what happens to a more complex system once we add mass or stiffness. For example, the above formula explains why, when a car or truck is fully loaded, the suspension feels "softer" than unloaded—the mass has increased, reducing the natural frequency of the system.
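A quick numeric check of the loaded-vehicle example, with hypothetical quarter-car numbers:

```python
import math

def natural_frequency(k, m):
    """Undamped natural frequency in Hz: f_n = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

k = 20_000.0                          # spring stiffness (N/m), assumed
print(natural_frequency(k, 250.0))    # ~1.42 Hz unloaded
print(natural_frequency(k, 400.0))    # ~1.13 Hz loaded: lower f_n, "softer" feel
```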
Vibrational motion could be understood in terms of conservation of energy . In the above example the spring has been extended by a value of x and therefore some potential energy ( 1 2 k x 2 {\displaystyle {\tfrac {1}{2}}kx^{2}} ) is stored in the spring. Once released, the spring tends to return to its un-stretched state (which is the minimum potential energy state) and in the process accelerates the mass. At the point where the spring has reached its un-stretched state all the potential energy that we supplied by stretching it has been transformed into kinetic energy ( 1 2 m v 2 {\displaystyle {\tfrac {1}{2}}mv^{2}} ). The mass then begins to decelerate because it is now compressing the spring and in the process transferring the kinetic energy back to its potential. Thus oscillation of the spring amounts to the transferring back and forth of the kinetic energy into potential energy. In this simple model the mass continues to oscillate forever at the same magnitude—but in a real system, damping always dissipates the energy, eventually bringing the spring to rest.
When a dashpot is added, this models sources of damping by generating a force that is proportional to the velocity of the mass. The proportionality constant c is called the damping coefficient and has units of Force over velocity (lbf⋅s/in or N⋅s/m).
Summing the forces on the mass results in the following ordinary differential equation: {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0.}
The solution to this equation depends on the amount of damping. If the damping is small enough, the system still vibrates—but eventually, over time, stops vibrating. This case is called underdamping, which is important in vibration analysis. If damping is increased just to the point where the system no longer oscillates, the system has reached the point of critical damping. If the damping is increased past critical damping, the system is overdamped. The value that the damping coefficient must reach for critical damping in the mass-spring-damper model is: {\displaystyle c_{\text{c}}=2{\sqrt {km}}.}
The damping ratio is used to characterize the amount of damping in a system. This is the ratio of the actual damping to the amount of damping required to reach critical damping. The formula for the damping ratio ( {\displaystyle \zeta } ) of the mass-spring-damper model is: {\displaystyle \zeta ={\frac {c}{2{\sqrt {km}}}}.}
For example, metal structures (e.g., airplane fuselages, engine crankshafts) have damping factors less than 0.05, while automotive suspensions are in the range of 0.2–0.3. The solution to the underdamped system for the mass-spring-damper model is the following: {\displaystyle x(t)=Xe^{-\zeta \omega _{\text{n}}t}\cos \left({\sqrt {1-\zeta ^{2}}}\,\omega _{\text{n}}t-\phi \right),\qquad \omega _{\text{n}}=2\pi f_{\text{n}}.}
The value of X , the initial magnitude, and {\displaystyle \phi } , the phase shift , are determined by the amount the spring is stretched. The formulas for these values can be found in the references.
The major points to note from the solution are the exponential term and the cosine function. The exponential term defines how quickly the system “damps” down – the larger the damping ratio, the quicker it damps to zero. The cosine function is the oscillating portion of the solution, but the frequency of the oscillations is different from the undamped case.
The frequency in this case is called the "damped natural frequency", {\displaystyle f_{\text{d}}} , and is related to the undamped natural frequency by the following formula: {\displaystyle f_{\text{d}}=f_{\text{n}}{\sqrt {1-\zeta ^{2}}}.}
The damped natural frequency is less than the undamped natural frequency, but for many practical cases the damping ratio is relatively small and hence the difference is negligible. Therefore, the damped and undamped description are often dropped when stating the natural frequency (e.g. with 0.1 damping ratio, the damped natural frequency is only 1% less than the undamped).
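A short worked example tying these formulas together, for an assumed lightly damped system (m = 1 kg, k = 1000 N/m, c = 2 N·s/m):

```python
import math

m, k, c = 1.0, 1000.0, 2.0
f_n = math.sqrt(k / m) / (2 * math.pi)      # undamped natural frequency
zeta = c / (2 * math.sqrt(k * m))           # damping ratio
f_d = f_n * math.sqrt(1 - zeta**2)          # damped natural frequency

print(f_n)     # ~5.033 Hz
print(zeta)    # ~0.032 (lightly damped)
print(f_d)     # ~5.030 Hz: nearly identical, as noted above
```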
The plots to the side present how 0.1 and 0.3 damping ratios affect how the system "rings" down over time. What is often done in practice is to experimentally measure the free vibration after an impact (for example by a hammer) and then determine the natural frequency of the system by measuring the rate of oscillation, as well as the damping ratio by measuring the rate of decay. The natural frequency and damping ratio are not only important in free vibration, but also characterize how a system behaves under forced vibration.
Both the damped and undamped natural frequencies can be estimated, when the mode shapes are not known, using the Rayleigh quotient . [ 11 ]
The behavior of the spring mass damper model varies with the addition of a harmonic force. A force of this type could, for example, be generated by a rotating imbalance.
Summing the forces on the mass results in the following ordinary differential equation: {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=F_{0}\cos(2\pi ft).}
The steady state solution of this problem can be written as: {\displaystyle x(t)=X\cos(2\pi ft-\phi ).}
The result states that the mass will oscillate at the same frequency, f , of the applied force, but with a phase shift {\displaystyle \phi } .
The amplitude of the vibration "X" is defined by the following formula: {\displaystyle X={\frac {F_{0}}{k}}{\frac {1}{\sqrt {(1-r^{2})^{2}+(2\zeta r)^{2}}}},}
where "r" is defined as the ratio of the harmonic force frequency to the undamped natural frequency of the mass–spring–damper model: {\displaystyle r={\frac {f}{f_{\text{n}}}}.}
The phase shift, {\displaystyle \phi } , is defined by the following formula: {\displaystyle \phi =\arctan \left({\frac {2\zeta r}{1-r^{2}}}\right).}
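Evaluating these amplitude and phase formulas numerically makes the resonance behavior discussed below concrete; the damping ratio of 0.1 is an arbitrary illustrative choice:

```python
import math

def amplification(r, zeta):
    """Dynamic amplification X*k/F0 = 1 / sqrt((1-r^2)^2 + (2*zeta*r)^2)."""
    return 1.0 / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

def phase_deg(r, zeta):
    """Phase lag of the response behind the force, in degrees."""
    return math.degrees(math.atan2(2 * zeta * r, 1 - r**2))

for r in (0.5, 1.0, 2.0):
    print(r, round(amplification(r, 0.1), 2), round(phase_deg(r, 0.1), 1))
# At r = 1 the lightly damped system amplifies the static deflection ~5x.
```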
The plot of these functions, called "the frequency response of the system", presents one of the most important features in forced vibration. In a lightly damped system, when the forcing frequency nears the natural frequency ( {\displaystyle r\approx 1} ) the amplitude of the vibration can get extremely high. This phenomenon is called resonance (subsequently the natural frequency of a system is often referred to as the resonant frequency). In rotor bearing systems any rotational speed that excites a resonant frequency is referred to as a critical speed .
If resonance occurs in a mechanical system it can be very harmful – leading to eventual failure of the system. Consequently, one of the major reasons for vibration analysis is to predict when this type of resonance may occur and then to determine what steps to take to prevent it from occurring. As the amplitude plot shows, adding damping can significantly reduce the magnitude of the vibration. Also, the magnitude can be reduced if the natural frequency can be shifted away from the forcing frequency by changing the stiffness or mass of the system. If the system cannot be changed, perhaps the forcing frequency can be shifted (for example, changing the speed of the machine generating the force).
The following are some other points in regards to the forced vibration shown in the frequency response plots.
Resonance is simple to understand if the spring and mass are viewed as energy storage elements – with the mass storing kinetic energy and the spring storing potential energy. As discussed earlier, when the mass and spring have no external force acting on them they transfer energy back and forth at a rate equal to the natural frequency. In other words, to efficiently pump energy into both mass and spring requires that the energy source feed the energy in at a rate equal to the natural frequency. Applying a force to the mass and spring is similar to pushing a child on a swing: a push is needed at the correct moment to make the swing go higher and higher. As in the case of the swing, the force applied need not be high to get large motions, but must just add energy to the system.
The damper, instead of storing energy, dissipates energy. Since the damping force is proportional to the velocity, the more the motion, the more the damper dissipates the energy. Therefore, there is a point when the energy dissipated by the damper equals the energy added by the force. At this point, the system has reached its maximum amplitude and will continue to vibrate at this level as long as the force applied stays the same. If no damping exists, there is nothing to dissipate the energy and, theoretically, the motion will continue to grow into infinity.
In a previous section only a simple harmonic force was applied to the model, but this can be extended considerably using two powerful mathematical tools. The first is the Fourier transform , which takes a signal as a function of time ( time domain ) and breaks it down into its harmonic components as a function of frequency ( frequency domain ). For example, consider applying to the mass–spring–damper model a force that repeats the following cycle: a force equal to 1 newton for 0.5 second and then no force for 0.5 second. This type of force has the shape of a 1 Hz square wave .
The Fourier transform of the square wave generates a frequency spectrum that presents the magnitude of the harmonics that make up the square wave (the phase is also generated, but is typically of less concern and therefore is often not plotted). The Fourier transform can also be used to analyze non- periodic functions such as transients (e.g. impulses) and random functions. The Fourier transform is almost always computed using the fast Fourier transform (FFT) computer algorithm in combination with a window function .
In the case of our square wave force, the first component is actually a constant force of 0.5 newton and is represented by a value at 0 Hz in the frequency spectrum. The next component is a 1 Hz sine wave with an amplitude of 0.64, shown by the line at 1 Hz. The remaining components are at odd frequencies, and it takes an infinite number of sine waves to generate the perfect square wave. Hence, the Fourier transform allows the force to be interpreted as a sum of sinusoidal forces being applied instead of a more "complex" force (e.g. a square wave).
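The stated components can be checked numerically by taking the FFT of a sampled 1 Hz square wave; the sampling rate and duration below are arbitrary choices:

```python
import numpy as np

fs, T = 1000, 4.0                       # sample rate (Hz) and duration (s)
t = np.arange(0.0, T, 1.0 / fs)
force = (np.sin(2 * np.pi * 1.0 * t) >= 0).astype(float)   # 1 N for half the cycle

spectrum = np.fft.rfft(force) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
amps = np.abs(spectrum)
amps[1:] *= 2                           # one-sided amplitudes

for f_hz in (0, 1, 2, 3, 5):
    print(f_hz, "Hz:", round(amps[np.argmin(np.abs(freqs - f_hz))], 3))
# -> 0.5 N at 0 Hz, ~0.637 (= 2/pi) at 1 Hz, ~0 at even harmonics,
#    ~0.212 at 3 Hz, ~0.127 at 5 Hz: only odd harmonics, as described.
```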
In the previous section, the vibration solution was given for a single harmonic force, but the Fourier transform in general gives multiple harmonic forces. The second mathematical tool, the superposition principle , allows the summation of the solutions from multiple forces if the system is linear . In the case of the spring–mass–damper model, the system is linear if the spring force is proportional to the displacement and the damping is proportional to the velocity over the range of motion of interest. Hence, the solution to the problem with a square wave is summing the predicted vibration from each one of the harmonic forces found in the frequency spectrum of the square wave.
The solution of a vibration problem can be viewed as an input/output relation – where the force is the input and the output is the vibration. Representing the force and vibration in the frequency domain (magnitude and phase) allows the following relation: {\displaystyle X(i\omega )=H(i\omega )\cdot F(i\omega ).}
{\displaystyle H(i\omega )} is called the frequency response function (also referred to as the transfer function , though that term is not technically accurate here) and has both a magnitude and phase component (if represented as a complex number , a real and imaginary component). The magnitude of the frequency response function (FRF) was presented earlier for the mass–spring–damper system: {\displaystyle |H(i\omega )|={\frac {1}{k}}{\frac {1}{\sqrt {(1-r^{2})^{2}+(2\zeta r)^{2}}}}.}
The phase of the FRF was also presented earlier as: {\displaystyle \angle H(i\omega )=-\arctan \left({\frac {2\zeta r}{1-r^{2}}}\right).}
For example, consider calculating the FRF for a mass–spring–damper system with a mass of 1 kg, spring stiffness of 1.93 N/mm and a damping ratio of 0.1. The values of the spring and mass give a natural frequency of 7 Hz for this specific system. Applying the 1 Hz square wave from earlier allows the calculation of the predicted vibration of the mass. The figure illustrates the resulting vibration. It happens in this example that the fourth harmonic of the square wave falls at 7 Hz. The frequency response of the mass–spring–damper therefore outputs a high 7 Hz vibration even though the input force had a relatively low 7 Hz harmonic. This example highlights that the resulting vibration is dependent on both the forcing function and the system that the force is applied to.
The figure also shows the time domain representation of the resulting vibration. This is done by performing an inverse Fourier Transform that converts frequency domain data to time domain. In practice, this is rarely done because the frequency spectrum provides all the necessary information.
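The worked example can be reproduced by multiplying each harmonic of the square wave by the FRF magnitude of the 7 Hz system; the harmonic force amplitudes 2/(nπ) follow from the Fourier series above:

```python
import math

m, k, zeta = 1.0, 1930.0, 0.1                 # 1 kg, 1.93 N/mm, damping ratio 0.1
f_n = math.sqrt(k / m) / (2 * math.pi)        # ~7 Hz

def frf_mag(f):
    """|H| = (1/k) / sqrt((1-r^2)^2 + (2*zeta*r)^2), with r = f / f_n."""
    r = f / f_n
    return (1.0 / k) / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

square_harmonics = {1: 0.637, 3: 0.212, 5: 0.127, 7: 0.091}   # N, 2/(n*pi)
for f, F in square_harmonics.items():
    print(f, "Hz: displacement", round(F * frf_mag(f) * 1000, 3), "mm")
# The 7 Hz harmonic is amplified ~5x by resonance, so it is prominent in the
# response even though its force amplitude is the smallest of the four.
```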
The frequency response function (FRF) does not necessarily have to be calculated from the knowledge of the mass, damping, and stiffness of the system—but can be measured experimentally. For example, if a known force over a range of frequencies is applied, and if the associated vibrations are measured, the frequency response function can be calculated, thereby characterizing the system. This technique is used in the field of experimental modal analysis to determine the vibration characteristics of a structure.
The simple mass–spring–damper model is the foundation of vibration analysis. The model described above is called a single degree of freedom (SDOF) model since the mass is assumed to only move up and down. In more complex systems, the system must be discretized into more masses that move in more than one direction, adding degrees of freedom. The major concepts of multiple degrees of freedom (MDOF) can be understood by looking at just a two degree of freedom model as shown in the figure.
The equations of motion of the 2DOF system are found to be: {\displaystyle m_{1}{\ddot {x}}_{1}+(c_{1}+c_{2}){\dot {x}}_{1}-c_{2}{\dot {x}}_{2}+(k_{1}+k_{2})x_{1}-k_{2}x_{2}=f_{1}} and {\displaystyle m_{2}{\ddot {x}}_{2}-c_{2}{\dot {x}}_{1}+(c_{2}+c_{3}){\dot {x}}_{2}-k_{2}x_{1}+(k_{2}+k_{3})x_{2}=f_{2}.}
This can be rewritten in matrix format: {\displaystyle {\begin{bmatrix}m_{1}&0\\0&m_{2}\end{bmatrix}}{\begin{Bmatrix}{\ddot {x}}_{1}\\{\ddot {x}}_{2}\end{Bmatrix}}+{\begin{bmatrix}c_{1}+c_{2}&-c_{2}\\-c_{2}&c_{2}+c_{3}\end{bmatrix}}{\begin{Bmatrix}{\dot {x}}_{1}\\{\dot {x}}_{2}\end{Bmatrix}}+{\begin{bmatrix}k_{1}+k_{2}&-k_{2}\\-k_{2}&k_{2}+k_{3}\end{bmatrix}}{\begin{Bmatrix}x_{1}\\x_{2}\end{Bmatrix}}={\begin{Bmatrix}f_{1}\\f_{2}\end{Bmatrix}}.}
A more compact form of this matrix equation can be written as: {\displaystyle {\begin{bmatrix}M\end{bmatrix}}{\begin{Bmatrix}{\ddot {x}}\end{Bmatrix}}+{\begin{bmatrix}C\end{bmatrix}}{\begin{Bmatrix}{\dot {x}}\end{Bmatrix}}+{\begin{bmatrix}K\end{bmatrix}}{\begin{Bmatrix}x\end{Bmatrix}}={\begin{Bmatrix}f\end{Bmatrix}},}
where {\displaystyle {\begin{bmatrix}M\end{bmatrix}}} , {\displaystyle {\begin{bmatrix}C\end{bmatrix}}} , and {\displaystyle {\begin{bmatrix}K\end{bmatrix}}} are symmetric matrices referred to respectively as the mass, damping, and stiffness matrices. The matrices are N×N square matrices where N is the number of degrees of freedom of the system.
The following analysis involves the case where there is no damping and no applied forces (i.e. free vibration). The solution of a viscously damped system is somewhat more complicated. [ 12 ]
This differential equation can be solved by assuming the following type of solution: {\displaystyle {\begin{Bmatrix}x\end{Bmatrix}}={\begin{Bmatrix}X\end{Bmatrix}}e^{i\omega t}.}
Note: Using the exponential solution {\displaystyle {\begin{Bmatrix}X\end{Bmatrix}}e^{i\omega t}} is a mathematical trick used to solve linear differential equations. Using Euler's formula and taking only the real part of the solution yields the same cosine solution as for the 1 DOF system. The exponential solution is only used because it is easier to manipulate mathematically.
The equation then becomes: {\displaystyle \left(-\omega ^{2}{\begin{bmatrix}M\end{bmatrix}}+{\begin{bmatrix}K\end{bmatrix}}\right){\begin{Bmatrix}X\end{Bmatrix}}e^{i\omega t}=0.}
Since {\displaystyle e^{i\omega t}} cannot equal zero the equation reduces to the following: {\displaystyle \left({\begin{bmatrix}K\end{bmatrix}}-\omega ^{2}{\begin{bmatrix}M\end{bmatrix}}\right){\begin{Bmatrix}X\end{Bmatrix}}=0.}
This is referred to as an eigenvalue problem in mathematics and can be put in the standard format by pre-multiplying the equation by {\displaystyle {\begin{bmatrix}M\end{bmatrix}}^{-1}}
and setting {\displaystyle {\begin{bmatrix}M\end{bmatrix}}^{-1}{\begin{bmatrix}K\end{bmatrix}}={\begin{bmatrix}A\end{bmatrix}}} and {\displaystyle \lambda =\omega ^{2}} , which gives: {\displaystyle \left({\begin{bmatrix}A\end{bmatrix}}-\lambda {\begin{bmatrix}I\end{bmatrix}}\right){\begin{Bmatrix}X\end{Bmatrix}}=0.}
The solution to the problem results in N eigenvalues (i.e. {\displaystyle \omega _{1}^{2},\omega _{2}^{2},\cdots ,\omega _{N}^{2}} ), where N corresponds to the number of degrees of freedom. The eigenvalues provide the natural frequencies of the system. When these eigenvalues are substituted back into the original set of equations, the values of {\displaystyle {\begin{Bmatrix}X\end{Bmatrix}}} that correspond to each eigenvalue are called the eigenvectors . These eigenvectors represent the mode shapes of the system. The solution of an eigenvalue problem can be quite cumbersome (especially for problems with many degrees of freedom), but fortunately most math analysis programs have eigenvalue routines.
The eigenvalues and eigenvectors are often written in the following matrix format and describe the modal model of the system: {\displaystyle {\begin{bmatrix}^{\diagdown }\omega _{r}^{2}{}_{\diagdown }\end{bmatrix}}={\begin{bmatrix}\omega _{1}^{2}&\cdots &0\\\vdots &\ddots &\vdots \\0&\cdots &\omega _{N}^{2}\end{bmatrix}}\quad {\text{and}}\quad {\begin{bmatrix}\Psi \end{bmatrix}}={\begin{bmatrix}{\begin{Bmatrix}\psi _{1}\end{Bmatrix}}&{\begin{Bmatrix}\psi _{2}\end{Bmatrix}}&\cdots &{\begin{Bmatrix}\psi _{N}\end{Bmatrix}}\end{bmatrix}}.}
A simple example using the 2 DOF model can help illustrate the concepts. Let both masses have a mass of 1 kg and the stiffness of all three springs equal 1000 N/m. The mass and stiffness matrices for this problem are then: {\displaystyle {\begin{bmatrix}M\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}\quad {\text{and}}\quad {\begin{bmatrix}K\end{bmatrix}}={\begin{bmatrix}2000&-1000\\-1000&2000\end{bmatrix}}.}
Then {\displaystyle {\begin{bmatrix}A\end{bmatrix}}={\begin{bmatrix}2000&-1000\\-1000&2000\end{bmatrix}}.}
The eigenvalues for this problem given by an eigenvalue routine are: {\displaystyle \lambda ={\begin{Bmatrix}\omega _{1}^{2}\\\omega _{2}^{2}\end{Bmatrix}}={\begin{Bmatrix}1000\\3000\end{Bmatrix}}.}
The natural frequencies in the units of hertz are then (remembering {\displaystyle \omega =2\pi f} ): {\displaystyle f_{1}=5.033{\text{ Hz}}} and {\displaystyle f_{2}=8.717{\text{ Hz}}.}
The two mode shapes for the respective natural frequencies are given as: {\displaystyle \psi _{1}={\begin{Bmatrix}0.707\\0.707\end{Bmatrix}}\quad {\text{and}}\quad \psi _{2}={\begin{Bmatrix}0.707\\-0.707\end{Bmatrix}}.}
Since the system is a 2 DOF system, there are two modes with their respective natural frequencies and shapes. The mode shape vectors are not the absolute motion, but just describe relative motion of the degrees of freedom. In our case the first mode shape vector is saying that the masses are moving together in phase since they have the same value and sign. In the case of the second mode shape vector, each mass is moving in opposite direction at the same rate.
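The example is easy to verify with any eigenvalue routine; here is a check using NumPy:

```python
import numpy as np

# 2 DOF example: m1 = m2 = 1 kg, three springs of 1000 N/m each.
M = np.eye(2)
K = np.array([[2000.0, -1000.0],
              [-1000.0, 2000.0]])

# Solve [A]{X} = lambda {X} with [A] = [M]^-1 [K] and lambda = omega^2.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(M) @ K)
order = np.argsort(eigvals)
omega = np.sqrt(eigvals[order])

print(omega / (2 * np.pi))   # -> [5.033 8.717] Hz
print(eigvecs[:, order])     # columns ~ 0.707*[1, 1] and 0.707*[1, -1]
```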
When there are many degrees of freedom, one method of visualizing the mode shapes is by animating them using structural analysis software such as Femap , ANSYS or VA One by ESI Group . An example of animating mode shapes is shown in the figure below for a cantilevered Ɪ-beam as demonstrated using modal analysis on ANSYS. In this case, the finite element method was used to generate an approximation of the mass and stiffness matrices by meshing the object of interest in order to solve a discrete eigenvalue problem . Note that, in this case, the finite element method provides an approximation of the meshed surface (for which there exists an infinite number of vibration modes and frequencies). Therefore, this relatively simple model that has over 100 degrees of freedom and hence as many natural frequencies and mode shapes, provides a good approximation for the first natural frequencies and modes † . Generally, only the first few modes are important for practical applications.
^ Note that when performing a numerical approximation of any mathematical model, convergence of the parameters of interest must be ascertained.
The eigenvectors have very important properties called orthogonality properties. These properties can be used to greatly simplify the solution of multi-degree of freedom models. It can be shown that the eigenvectors have the following properties: {\displaystyle {\begin{bmatrix}\Psi \end{bmatrix}}^{T}{\begin{bmatrix}M\end{bmatrix}}{\begin{bmatrix}\Psi \end{bmatrix}}={\begin{bmatrix}^{\diagdown }m_{r\diagdown }\end{bmatrix}}\quad {\text{and}}\quad {\begin{bmatrix}\Psi \end{bmatrix}}^{T}{\begin{bmatrix}K\end{bmatrix}}{\begin{bmatrix}\Psi \end{bmatrix}}={\begin{bmatrix}^{\diagdown }k_{r\diagdown }\end{bmatrix}}.}
{\displaystyle {\begin{bmatrix}^{\diagdown }m_{r\diagdown }\end{bmatrix}}} and {\displaystyle {\begin{bmatrix}^{\diagdown }k_{r\diagdown }\end{bmatrix}}} are diagonal matrices that contain the modal mass and stiffness values for each one of the modes. (Note: Since the eigenvectors (mode shapes) can be arbitrarily scaled, the orthogonality properties are often used to scale the eigenvectors so the modal mass value for each mode is equal to 1. The modal mass matrix is therefore an identity matrix .)
These properties can be used to greatly simplify the solution of multi-degree of freedom models by making the coordinate transformation $\{x\} = [\Psi]\{q\}$.
Using this coordinate transformation in the original free vibration differential equation results in: $[M][\Psi]\{\ddot{q}\} + [K][\Psi]\{q\} = 0.$
Taking advantage of the orthogonality properties by premultiplying this equation by $[\Psi]^T$ gives: $[\Psi]^T[M][\Psi]\{\ddot{q}\} + [\Psi]^T[K][\Psi]\{q\} = 0.$
The orthogonality properties then simplify this equation to: $[^{\diagdown}m_{r\diagdown}]\{\ddot{q}\} + [^{\diagdown}k_{r\diagdown}]\{q\} = 0.$
This equation is the foundation of vibration analysis for multiple degree of freedom systems. A similar type of result can be derived for damped systems. [ 12 ] The key is that the modal mass and stiffness matrices are diagonal matrices and therefore the equations have been "decoupled". In other words, the problem has been transformed from a large unwieldy multiple degree of freedom problem into many single degree of freedom problems that can be solved using the same methods outlined above.
Solving for x is replaced by solving for q , referred to as the modal coordinates or modal participation factors.
It may be clearer to understand if $\{x\} = [\Psi]\{q\}$ is written as: $\{x\} = \{\psi_1\}q_1 + \{\psi_2\}q_2 + \cdots + \{\psi_N\}q_N.$
Written in this form it can be seen that the vibration at each of the degrees of freedom is just a linear sum of the mode shapes. Furthermore, how much each mode "participates" in the final vibration is defined by q, its modal participation factor.
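Continuing the earlier sketch (same assumed matrices), a few lines make the decoupling concrete: premultiplying by $[\Psi]^T$ really does diagonalize both matrices. Note that `scipy.linalg.eigh` returns mass-normalized eigenvectors, so the modal mass matrix comes out as the identity:

```python
import numpy as np
from scipy.linalg import eigh

M = np.eye(2)                                   # mass matrix from the example
K = np.array([[2000.0, -1000.0],
              [-1000.0, 2000.0]])               # stiffness matrix, N/m
omega_sq, Psi = eigh(K, M)                      # eigh mass-normalizes the modes

modal_mass = Psi.T @ M @ Psi                    # -> identity matrix
modal_stiffness = Psi.T @ K @ Psi               # -> diag(1000, 3000) = diag(omega^2)
print(np.round(modal_mass, 10))
print(np.round(modal_stiffness, 6))
```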
An unrestrained multi-degree of freedom system can undergo rigid-body translation and/or rotation as well as vibration. The existence of a rigid-body mode results in a zero natural frequency. The corresponding mode shape is called the rigid-body mode.
|
https://en.wikipedia.org/wiki/Vibration
|
Vibration fatigue is a mechanical engineering term describing material fatigue caused by forced vibration of a random nature. An excited structure responds according to its natural-dynamics modes, which results in a dynamic stress load at points in the material. [ 1 ] The process of material fatigue is thus governed largely by the shape of the excitation profile and the response it produces. As the profiles of excitation and response are preferably analyzed in the frequency domain, it is practical to use fatigue-life evaluation methods that can operate on frequency-domain data, such as the power spectral density (PSD).
A crucial part of a vibration fatigue analysis is the modal analysis, which exposes the natural modes and frequencies of the vibrating structure and enables accurate prediction of the local stress responses for the given excitation. Only then, when the stress responses are known, can vibration fatigue be successfully characterized.
The more classical approach to fatigue evaluation consists of cycle counting, using the rainflow algorithm, and summation by means of the Palmgren-Miner linear damage hypothesis, which appropriately sums the damage contributions of the respective cycles. When the time history is not known, because the load is random (e.g. a car on a rough road or a wind-driven turbine), those cycles cannot be counted. Multiple time histories can be simulated for a given random process, but such a procedure is cumbersome and computationally expensive. [ 2 ]
Vibration-fatigue methods offer a more effective approach, which estimates fatigue life based on moments of the PSD. In this way, they estimate a value that would otherwise have to be calculated with the time-domain approach. When dealing with many material nodes experiencing different responses (e.g. a model in an FEM package), time histories need not be simulated. With vibration-fatigue methods it then becomes viable to calculate fatigue life at many points on the structure and successfully predict where failure will most probably occur.
In a random process, the amplitude cannot be described as a function of time, because of its probabilistic nature. However, certain statistical properties can be extracted from a signal sample representing a realization of a random process, provided the latter is ergodic. An important characteristic for the field of vibration fatigue is the amplitude probability density function, which describes the statistical distribution of peak amplitudes. Ideally, the probability of cycle amplitudes, describing the load severity, could then be deduced directly. However, as this is not always possible, the sought-after probability is often estimated empirically.
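For intuition, a brief simulation sketch (NumPy/SciPy assumed; the 45-55 Hz band is an arbitrary choice) checks the classical narrow-band result that peak amplitudes of a Gaussian process follow a Rayleigh distribution:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs, n = 1000.0, 200_000
x = rng.normal(size=n)                # one realization of white Gaussian noise

# Narrow-band filter: keep only the 45-55 Hz band of the spectrum
spec = np.fft.rfft(x)
f = np.fft.rfftfreq(n, 1.0 / fs)
spec[(f < 45.0) | (f > 55.0)] = 0.0
xb = np.fft.irfft(spec, n)

sigma = xb.std()                      # RMS of the narrow-band process
peaks, _ = find_peaks(xb)
amps = xb[peaks]
amps = amps[amps > 0]                 # positive peaks only

# For a narrow-band Gaussian process the peaks are Rayleigh-distributed,
# so the mean peak amplitude should be close to sigma * sqrt(pi/2):
print(amps.mean(), sigma * np.sqrt(np.pi / 2))
```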
Random excitation of the structure produces different responses, depending on the natural dynamics of the structure in question. Different natural modes get excited and each greatly affects the stress distribution in the material. The standard procedure is to calculate frequency response functions for the analyzed structure and then obtain the stress responses, based on the given loading or excitation. [ 3 ] Because different modes are excited, the spread of vibration energy over a frequency range directly affects the durability of the structure. Structural dynamics analysis is thus a key part of vibration-fatigue evaluation.
Calculation of damage intensity is straightforward once the cycle amplitude distribution is known. This distribution can be obtained from a time-history simply by counting cycles. To obtain it from the PSD another approach must be taken.
Various vibration-fatigue methods estimate damage intensity based on moments of the PSD, which characterize the statistical properties of the random process. The formulas for calculating such an estimate are empirical (with very few exceptions) and are based on numerous simulations of random processes with known PSD. As a consequence, the accuracy of those methods varies, depending on the analyzed response spectra, the material parameters and the method itself; some are more accurate than others. [ 4 ]
The most commonly used method is the one developed by T. Dirlik in 1985. [ 5 ] Recent research on frequency-domain methods of fatigue-life estimation [ 4 ] compared well-established methods as well as recent ones; the conclusion was that the methods by Zhao and Baker, developed in 1992, [ 6 ] and by Benasciutti and Tovo, developed in 2004, [ 7 ] are also very suitable for vibration-fatigue analysis. For the narrow-band approximation of a random process, an analytical expression for damage intensity was given by Miles. [ 8 ] Several approaches adapt the narrow-band approximation: Wirsching and Light proposed an empirical correction factor in 1980 [ 9 ] and Benasciutti presented the α0.75 method in 2004. [ 10 ] In 2008, Gao and Moan published a spectral method that combines three narrow-band processes. [ 11 ] Implementations of these methods are available in the open-source Python package FLife. [ 12 ]
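As an illustration of the spectral-moment approach, here is a minimal sketch of the narrow-band (Miles) estimate. It is not the FLife API, and the flat PSD and the S-N parameters C and k below are hypothetical values chosen only for demonstration:

```python
import numpy as np
from scipy.special import gamma

def spectral_moment(freq, psd, i):
    """i-th spectral moment: m_i = integral of f**i * PSD(f) df."""
    return np.trapz(freq**i * psd, freq)

def miles_damage_intensity(freq, psd, C, k):
    """Narrow-band (Miles) damage per unit time for an S-N curve N = C / s**k."""
    m0 = spectral_moment(freq, psd, 0)           # variance of the stress
    m2 = spectral_moment(freq, psd, 2)
    nu0 = np.sqrt(m2 / m0)                       # zero-upcrossing rate, 1/s
    return nu0 / C * (np.sqrt(2.0 * m0))**k * gamma(1.0 + k / 2.0)

# Hypothetical flat stress PSD between 10 and 50 Hz:
freq = np.linspace(0.1, 100.0, 2000)
psd = np.where((freq > 10.0) & (freq < 50.0), 1.0, 0.0)   # MPa^2/Hz
D = miles_damage_intensity(freq, psd, C=1e20, k=7.3)
print(f"damage intensity {D:.3e} 1/s -> fatigue life ~ {1.0 / D:.3e} s")
```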
Vibration-fatigue methods find use wherever the structure experiences loading caused by a random process. Examples are the forces that bumps on the road exert on a car chassis, the wind blowing on a wind turbine, and waves hitting an offshore structure or a marine vessel. Such loads are first characterized statistically, by measurement and analysis. The data are then used in the product design process. [ 13 ]
The computational effectiveness of vibration-fatigue methods, in contrast to the classical approach, enables their use in combination with FEM software packages to evaluate fatigue once the loading is known and the dynamic analysis has been performed. Vibration-fatigue methods are well suited here, as structural analysis is commonly carried out in the frequency domain.
Common practice in the automotive industry is the use of accelerated vibration tests. During the test, a part or a product is exposed to vibrations that correlate with those expected during the service life of the product. To shorten the testing time, the amplitudes are amplified. The excitation spectra used are broad-band and can be evaluated most effectively using vibration-fatigue methods.
|
https://en.wikipedia.org/wiki/Vibration_fatigue
|
Vibration isolation is the prevention of transmission of vibration from one component of a system to other parts of the same system, as in buildings or mechanical systems. [ 1 ] Vibration is undesirable in many domains, primarily engineered systems and habitable spaces, and methods have been developed to prevent the transfer of vibration to such systems. Vibrations propagate via mechanical waves, and certain mechanical linkages conduct vibrations more efficiently than others. Passive vibration isolation makes use of materials and mechanical linkages that absorb and damp these mechanical waves. Active vibration isolation involves sensors and actuators that produce disruptive interference that cancels out incoming vibration.
"Passive vibration isolation" refers to vibration isolation or mitigation of vibrations by passive techniques such as rubber pads or mechanical springs, as opposed to "active vibration isolation" or "electronic force cancellation" employing electric power, sensors, actuators, and control systems.
Passive vibration isolation is a vast subject, since there are many types of passive vibration isolators used for many different applications. A few of these applications are for industrial equipment such as pumps, motors, HVAC systems, or washing machines; isolation of civil engineering structures from earthquakes (base isolation), [ 2 ] sensitive laboratory equipment, valuable statuary, and high-end audio.
The following is a basic overview of how passive isolation works, the more common types of passive isolators, and the main factors that influence the selection of passive isolators:
A passive isolation system, such as a shock mount , in general contains mass, spring, and damping elements and moves as a harmonic oscillator . The mass and spring stiffness dictate a natural frequency of the system. Damping causes energy dissipation and has a secondary effect on natural frequency.
Every object on a flexible support has a fundamental natural frequency. When vibration is applied, energy is transferred most efficiently at the natural frequency, somewhat efficiently below the natural frequency, and with decreasing efficiency above the natural frequency. This can be seen in the transmissibility curve, which is a plot of transmissibility vs. frequency.
Here is an example of a transmissibility curve. Transmissibility is the ratio of vibration of the isolated surface to that of the source. Vibrations are never eliminated, but they can be greatly reduced. The curve below shows the typical performance of a passive, negative-stiffness isolation system with a natural frequency of 0.5 Hz. The general shape of the curve is typical for passive systems. Below the natural frequency, transmissibility hovers near 1. A value of 1 means that vibration is going through the system without being amplified or reduced. At the resonant frequency, energy is transmitted efficiently, and the incoming vibration is amplified. Damping in the system limits the level of amplification. Above the resonant frequency, little energy can be transmitted, and the curve rolls off to a low value. A passive isolator can be seen as a mechanical low-pass filter for vibrations.
In general, for any given frequency above the natural frequency, an isolator with a lower natural frequency will show greater isolation than one with a higher natural frequency. The best isolation system for a given situation depends on the frequency, direction, and magnitude of vibrations present and the desired level of attenuation of those frequencies.
All mechanical systems in the real world contain some amount of damping. Damping dissipates energy in the system, which reduces the vibration level transmitted at the natural frequency. The fluid in automotive shock absorbers is a kind of damper, as is the inherent damping in elastomeric (rubber) engine mounts.
Damping is used in passive isolators to reduce the amount of amplification at the natural frequency. However, increasing damping tends to reduce isolation at the higher frequencies. As damping is increased, transmissibility roll-off decreases. This can be seen in the chart below.
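The trade-off described above can be seen numerically with the standard single-degree-of-freedom transmissibility formula; the sketch below (damping ratios chosen purely for illustration) evaluates it at a few frequency ratios:

```python
import numpy as np

def transmissibility(r, zeta):
    """|X_isolated / X_source| for a single-DOF isolator; r = f / f_natural."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.array([0.1, 1.0, 3.0, 10.0])      # frequency ratios f / f_n
for zeta in (0.05, 0.2, 0.5):            # illustrative damping ratios
    print(f"zeta = {zeta}:", np.round(transmissibility(r, zeta), 3))
# Near r = 1 the curve peaks (amplification); more damping lowers that peak,
# but raises transmissibility for r >> 1, i.e. the roll-off becomes slower.
```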
Passive isolation operates in both directions, isolating the payload from vibrations originating in the support, and also isolating the support from vibrations originating in the payload. Large machines such as washers, pumps, and generators, which would cause vibrations in the building or room, are often isolated from the floor. However, there are a multitude of sources of vibration in buildings, and it is often not possible to isolate each source. In many cases, it is most efficient to isolate each sensitive instrument from the floor. Sometimes it is necessary to implement both approaches.
In superyachts, the engines and alternators produce noise and vibrations. A common solution is a double elastic suspension, in which the engine and alternator are mounted with vibration dampers on a common frame, and this assembly is then mounted elastically between the common frame and the hull. [ 7 ]
Negative-stiffness-mechanism (NSM) vibration isolation systems offer a unique passive approach for achieving low-vibration environments and isolation against sub-hertz vibrations. "Snap-through" or "over-center" NSM devices are used to reduce the stiffness of elastic suspensions and create compact six-degree-of-freedom systems with low natural frequencies. Practical systems with vertical and horizontal natural frequencies as low as 0.2 to 0.5 Hz are possible. Electro-mechanical auto-adjust mechanisms compensate for varying weight loads and provide automatic leveling in multiple-isolator systems, similar to the function of leveling valves in pneumatic systems. All-metal systems can be configured that are compatible with high vacuum and other adverse environments such as high temperatures.
These isolation systems enable vibration-sensitive instruments such as scanning probe microscopes, micro-hardness testers and scanning electron microscopes to operate in severe vibration environments sometimes encountered, for example, on upper floors of buildings and in clean rooms. Such operation would not be practical with pneumatic isolation systems. Similarly, they enable vibration-sensitive instruments to produce better images and data than those achievable with pneumatic isolators.
The theory of operation of NSM vibration isolation systems is developed in detail in the literature; it is summarized briefly below for convenience, together with descriptions of some typical systems and applications and data on measured performance.
A vertical-motion isolator is shown in Figure 1. It uses a conventional spring connected to an NSM consisting of two bars hinged at the center, supported at their outer ends on pivots, and loaded in compression by forces P. The spring is compressed by the weight W to the operating position of the isolator, as shown in Figure 1. The stiffness of the isolator is $K = K_S - K_N$, where $K_S$ is the spring stiffness and $K_N$ is the magnitude of the negative stiffness, which is a function of the length of the bars and the load P. The isolator stiffness can be made to approach zero while the spring still supports the weight W.
A horizontal-motion isolator consisting of two beam-columns is illustrated in Figure 2. Each beam-column behaves like two fixed-free beam columns loaded axially by a weight load W. Without the weight load, the beam-columns have horizontal stiffness $K_S$. With the weight load, the lateral bending stiffness is reduced by the "beam-column" effect. This behavior is equivalent to a horizontal spring combined with an NSM, so that the horizontal stiffness is $K = K_S - K_N$, where $K_N$ is the magnitude of the beam-column effect. The horizontal stiffness can be made to approach zero by loading the beam-columns toward their critical buckling load.
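A small numeric sketch (all load and stiffness values hypothetical) illustrates why the NSM approach yields such low natural frequencies: as $K_N$ approaches $K_S$, the net stiffness, and with it $f_n = \sqrt{K/m}/(2\pi)$, approaches zero while the spring still carries the full weight:

```python
import math

W = 500.0                 # supported weight, N (hypothetical payload)
m = W / 9.81              # supported mass, kg
K_S = 2.0e4               # positive spring stiffness, N/m

for K_N in (0.0, 1.5e4, 1.9e4, 1.99e4):   # negative-stiffness magnitudes, N/m
    K = K_S - K_N                          # net isolator stiffness
    f_n = math.sqrt(K / m) / (2.0 * math.pi)
    print(f"K_N = {K_N:8.0f} N/m -> f_n = {f_n:4.2f} Hz")
# As K_N -> K_S the natural frequency drops toward the sub-hertz range
# quoted above, while the spring itself still carries the full weight W.
```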
A six-DOF NSM isolator typically uses three isolators stacked in series: a tilt-motion isolator on top of a horizontal-motion isolator on top of a vertical-motion isolator. Figure 3 shows a schematic of a vibration isolation system consisting of a weighted platform supported by a single six-DOF isolator incorporating the isolators of Figures 1 and 2. Flexures are used in place of the hinged bars shown in Figure 1. A tilt flexure serves as the tilt-motion isolator. A vertical-stiffness adjustment screw is used to adjust the compression force on the negative-stiffness flexures, thereby changing the vertical stiffness. A vertical load adjustment screw is used to adjust for varying weight loads by raising or lowering the base of the support spring to keep the flexures in their straight, unbent operating positions.
Equipment and other mechanical components are necessarily linked to surrounding objects (a supporting joint to the support structure; a non-supporting joint to a pipe duct or cable), thus presenting the opportunity for unwanted transmission of vibrations. Using a suitably designed vibration isolator (absorber), vibration isolation of the supporting joint is realized. The accompanying illustration shows the attenuation of vibration levels, as measured before and after installation of the operating machinery on a vibration isolator, for a wide range of frequencies.
A vibration isolator is defined as a device that reflects and absorbs waves of oscillatory energy emanating from a piece of working machinery or electrical equipment, the desired effect being vibration insulation. The goal is to establish vibration isolation between a body transferring mechanical fluctuations and a supporting body (for example, between the machine and the foundation). The illustration shows a vibration isolator from the series «ВИ» (~"VI" in Roman characters), as used in shipbuilding in Russia, for example in the submarine "St. Petersburg" (Lada). The depicted «ВИ» devices carry rated loads of 5, 40 and 300 kg. They differ in their physical sizes, but all share the same fundamental design. The structure consists of a rubber envelope that is internally reinforced by a spring. During manufacture, the rubber and the spring are intimately and permanently connected as a result of the vulcanization process that is integral to the processing of the crude rubber material. Under the weight loading of the machine, the rubber envelope deforms and the spring is compressed or stretched; twisting of the enveloping rubber therefore occurs across the spring's cross-section. The resulting elastic deformation of the rubber envelope results in very effective absorption of the vibration. This absorption is crucial to reliable vibration insulation, because it averts the potential for resonance effects. The amount of elastic deformation of the rubber largely dictates the magnitude of vibration absorption that can be attained; the entire device (including the spring itself) must be designed with this in mind. The design of the vibration isolator must also take into account potential exposure to shock loadings, in addition to routine everyday vibrations. Lastly, the vibration isolator must be designed for long-term durability as well as convenient integration into the environment in which it is to be used. Sleeves and flanges are typically employed to enable the vibration isolator to be securely fastened to the equipment and the supporting foundation.
Vibration isolation of a non-supporting joint is realized with a device called a vibration-isolating branch pipe.
A vibration-isolating branch pipe is a section of tube with elastic walls that reflects and absorbs waves of oscillatory energy travelling from the working pump along the walls of the pipe duct. It is installed between the pump and the pipe duct. The illustration shows a vibration-isolating branch pipe of the series «ВИПБ». Its structure uses a rubber envelope reinforced by a spring, with properties similar to those of the vibration isolator's envelope described above. The device also includes a mechanism that reduces the axial force arising from internal pressure to nearly zero.
Another technique used to increase isolation is to use an isolated subframe. This splits the system with an additional mass/spring/damper system. This doubles the high frequency attenuation rolloff , at the cost of introducing additional low frequency modes which may cause the low frequency behaviour to deteriorate. This is commonly used in the rear suspensions of cars with Independent Rear Suspension (IRS), and in the front subframes of some cars. The graph (see illustration) shows the force into the body for a subframe that is rigidly bolted to the body compared with the red curve that shows a compliantly mounted subframe. Above 42 Hz the compliantly mounted subframe is superior, but below that frequency the bolted in subframe is better.
Semiactive vibration isolators have received attention because they consume less power than active devices and offer greater controllability than passive systems.
Active vibration isolation systems contain, along with the spring, a feedback circuit which consists of a sensor (for example a piezoelectric accelerometer or a geophone), a controller, and an actuator. The acceleration (vibration) signal is processed by a control circuit and amplifier, and then drives the electromagnetic actuator, which counteracts the incoming vibration. As a result of such a feedback system, a considerably stronger suppression of vibrations is achieved compared to ordinary damping. Active isolation today is used for applications where structures smaller than a micrometer have to be produced or measured. A couple of companies produce active isolation products as OEM for research, metrology, lithography and medical systems. Another important application is the semiconductor industry: in microchip production the smallest structures today are below 20 nm, so the machines which produce and inspect them must vibrate far less.
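As a rough illustration of the feedback principle (not any particular commercial system), the sketch below simulates the simplest active control law, "skyhook" velocity feedback, on a single-degree-of-freedom mount; all parameter values are hypothetical:

```python
import math

m, k, c = 100.0, 4.0e4, 200.0   # payload mass (kg), mount stiffness (N/m), damping (N*s/m)
g_sky = 2000.0                   # skyhook feedback gain, N/(m/s); set to 0 for passive-only
f_drive = 3.0                    # ground vibration frequency, Hz (near the ~3.2 Hz resonance)
dt = 1e-4
x, v, amp = 0.0, 0.0, 0.0

for i in range(300_000):         # 30 s of simulated time
    t = i * dt
    w = 2.0 * math.pi * f_drive
    xg = 1e-3 * math.sin(w * t)              # ground displacement, 1 mm amplitude
    vg = 1e-3 * w * math.cos(w * t)          # ground velocity
    f_mount = -k * (x - xg) - c * (v - vg)   # passive spring/damper force
    f_act = -g_sky * v                       # sensor measures absolute velocity v;
                                             # actuator pushes against it ("skyhook")
    v += (f_mount + f_act) / m * dt          # semi-implicit Euler integration
    x += v * dt
    if t > 20.0:                             # record steady-state amplitude
        amp = max(amp, abs(x))

print("transmissibility:", amp / 1e-3)       # ~1.0 with feedback; ~7 if g_sky = 0
```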
|
https://en.wikipedia.org/wiki/Vibration_isolation
|
The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between the molecule and its protein odorant receptor (found in the nasal epithelium), such as electrostatic and van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal ion and cation-pi interactions, and hydrophobic effects, in addition to the molecule's conformation. [ 1 ] [ 2 ] [ 3 ]
The current vibration theory has recently been called the "swipe card" model, in contrast with "lock and key" models based on shape theory. [ 4 ] As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. [ 5 ] The vibration theory is discussed in a popular but controversial book by Chandler Burr. [ 6 ] [ 7 ]
The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the "lock and key" model. [ 5 ] For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link between receptor response and duration of the residence of the odorant in the receptor is recognised, differences in odor between enantiomers can be understood: molecules with different handedness may spend different amounts of time in a given receptor, and so initiate responses of different intensities.
Since some aroma molecules of different shapes smell the same (e.g. benzaldehyde and hydrogen cyanide, differently shaped molecules that share an almond-like scent), the shape-based "lock and key" model is not by itself sufficient to explain what is going on. Experiments with olfaction that take quantum mechanics into consideration suggest that ultimately both theories might work in harmony: first the scent molecule needs to fit, as in the docking theory of olfaction, and then the molecular vibrations of the chemical bonds take over. In essence, the sense of smell might be more like the sense of hearing, with the nose 'listening' to the vibrational modes of aroma molecules.
Some studies support vibration theory while others challenge its findings.
The theory was first proposed by Malcolm Dyson in 1928 [ 8 ] and expanded by Robert H. Wright in 1954, after which it was largely abandoned in favor of the competing shape theory. A 1996 paper by Luca Turin revived the theory by proposing a mechanism, speculating that the G-protein-coupled receptors discovered by Linda Buck and Richard Axel were actually measuring molecular vibrations using inelastic electron tunneling as Turin claimed, rather than responding to molecular keys fitting molecular locks, working by shape alone. [ 5 ] [ 9 ] In 2007 a Physical Review Letters paper by Marshall Stoneham and colleagues at University College London and Imperial College London showed that Turin's proposed mechanism was consistent with known physics and coined the expression "swipe card model" to describe it. [ 10 ] A PNAS paper in 2011 by Turin, Efthimios Skoulakis, and colleagues at MIT and the Alexander Fleming Biomedical Sciences Research Center reported fly behavioral experiments consistent with a vibrational theory of smell. [ 11 ] The theory remains controversial. [ 3 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ]
A major prediction of Turin's theory is the isotope effect: that the normal and deuterated versions of a compound should smell different, although they have the same shape. A 2001 study by Haffenden et al. showed humans able to distinguish benzaldehyde from its deuterated version. [ 19 ] [ 20 ] However, this study has been criticized for lacking double-blind controls to eliminate bias and because it used an anomalous version of the duo-trio test . [ 21 ] In another study, tests with animals have shown fish and insects able to distinguish isotopes by smell. [ 22 ] [ 23 ] [ 24 ] [ 25 ]
Deuteration changes the heats of adsorption and the boiling and freezing points of molecules (boiling points: 100.0 °C for H₂O vs. 101.42 °C for D₂O; melting points: 0.0 °C for H₂O, 3.82 °C for D₂O), the dissociation constant (9.71×10⁻¹⁵ for H₂O vs. 1.95×10⁻¹⁵ for D₂O, cf. heavy water) and the strength of hydrogen bonding. Such isotope effects are exceedingly common, and so it is well known that deuterium substitution will indeed change the binding constants of molecules to protein receptors. [ 26 ] Any binding interaction of an odorant molecule with an olfactory receptor will therefore be likely to show some isotope effect upon deuteration, and the observation of an isotope effect in no way argues exclusively for a vibrational theory of olfaction.
A study published in 2011 by Franco, Turin, Mershin and Skoulakis shows both that flies can smell deuterium, and that to flies, a carbon-deuterium bond smells like a nitrile, which has a similar vibration. The study reports that Drosophila melanogaster (the fruit fly), which is ordinarily attracted to acetophenone, spontaneously dislikes deuterated acetophenone. This dislike increases with the number of deuteriums. (Flies genetically altered to lack smell receptors could not tell the difference.) Flies could also be trained by electric shocks either to avoid the deuterated molecule or to prefer it to the normal one. When these trained flies were then presented with a completely new and unrelated choice of normal vs. deuterated odorants, they avoided or preferred deuterium as with the previous pair. This suggested that flies were able to smell deuterium regardless of the rest of the molecule. To determine whether this deuterium smell was actually due to vibrations of the carbon-deuterium (C-D) bond or to some unforeseen effect of isotopes, the researchers looked to nitriles, which have a similar vibration to the C-D bond. Flies trained to avoid deuterium and asked to choose between a nitrile and its non-nitrile counterpart did avoid the nitrile, lending support to the idea that the flies are smelling vibrations. [ 25 ] Further isotope smell studies are under way in fruit flies and dogs. [ 27 ]
Carvone presented a perplexing situation for vibration theory. Carvone has two enantiomers (mirror-image isomers), which have identical vibrations, yet one smells like mint and the other like caraway (for which the compound is named).
An experiment by Turin filmed by the 1995 BBC Horizon documentary "A Code in the Nose" consisted of mixing the mint isomer with butanone , on the theory that the shape of the G-protein-coupled receptor prevented the carbonyl group in the mint isomer from being detected by the "biological spectroscope". The experiment succeeded with the trained perfumers used as subjects, who perceived that a mixture of 60% butanone and 40% mint carvone smelled like caraway.
According to Turin's original paper in the journal Chemical Senses , the well documented smell of borane compounds is sulfurous, though these molecules contain no sulfur . He proposes to explain this by the similarity in frequency between the vibration of the B-H bond and the S-H bond. [ 5 ] However, it has been pointed out that for o -carborane, which has a very strong B−H stretch at 2575 cm −1 , the "onion-like odor of crude commercial o -carborane is replaced by a pleasant camphoraceous odor on careful purification, reflecting the method for commercial preparation of o -carborane from reactions promoted by onion-smelling diethyl sulfide, which is removed on purification." [ 3 ]
Biophysical simulations published in Physical Review Letters in 2006 suggest that Turin's proposal is viable from a physics standpoint. [ 10 ] [ 28 ] However, Block et al. in their 2015 paper in Proceedings of the National Academy of Sciences indicate that their theoretical analysis shows that "the proposed electron transfer mechanism of the vibrational frequencies of odorants [ 10 ] could be easily suppressed by quantum effects of nonodorant molecular vibrational modes". [ 17 ]
A 2004 paper published in the journal Organic & Biomolecular Chemistry by Takane and Mitchell shows that odor descriptions in the olfaction literature correlate better with EVA descriptors, which loosely correspond to the vibrational spectrum, than with descriptors based on the two-dimensional connectivity of the molecule. The study did not consider molecular shape. [ 29 ]
Turin points out that traditional lock-and-key receptor interactions deal with agonists , which increase the receptor's time spent in the active state, and antagonists , which increase the time spent in the inactive state. In other words, some ligands tend to turn the receptor on and some tend to turn it off. As an argument against the traditional lock-and-key theory of smell, very few olfactory antagonists have been found.
In 2004, a Japanese research group published that an oxidation product of isoeugenol is able to antagonize, or prevent, mice olfactory receptor response to isoeugenol. [ 30 ]
Three predictions by Luca Turin on the nature of smell, using concepts of vibration theory, were addressed by experimental tests published in Nature Neuroscience in 2004 by Vosshall and Keller. [ 21 ] The study failed to support the prediction that isotopes should smell different, with untrained human subjects unable to distinguish acetophenone from its deuterated counterpart. [ 10 ] [ 28 ] [ 34 ] This study also pointed to experimental design flaws in the earlier study by Haffenden. [ 19 ] In addition, Turin's description of the odor of long-chain aldehydes as alternately (1) dominantly waxy and faintly citrus and (2) dominantly citrus and faintly waxy was not supported by tests on untrained subjects, despite anecdotal support from fragrance industry professionals who work regularly with these materials. Vosshall and Keller also presented a mixture of guaiacol and benzaldehyde to subjects, to test Turin's theory that the mixture should smell of vanillin . Vosshall and Keller's data did not support Turin's prediction. However, Vosshall says these tests do not disprove the vibration theory. [ 35 ]
In response to the 2011 PNAS study on flies, Vosshall acknowledged that flies could smell isotopes but called the conclusion that smell was based on vibrations an "overinterpretation" and expressed skepticism about using flies to test a mechanism originally ascribed to human receptors. [ 27 ] For the theory to be confirmed, Vosshall stated there must be further studies on mammalian receptors. [ 36 ] Bill Hansson, an insect olfaction specialist, raised the question of whether deuterium could affect hydrogen bonds between the odorant and receptor. [ 37 ]
In 2013, Turin and coworkers confirmed Vosshall and Keller's experiments showing that even trained human subjects were unable to distinguish acetophenone from its deuterated counterpart. [ 38 ] At the same time Turin and coworkers reported that human volunteers were able to distinguish cyclopentadecanone from its fully deuterated analog. To account for the different results seen with acetophenone and cyclopentadecanone, Turin and coworkers assert that "there must be many C-H bonds before they are detectable by smell. In contrast to acetophenone which contains only 8 hydrogens, cyclopentadecanone has 28. This results in more than 3 times the number of vibrational modes involving hydrogens than in acetophenone, and this is likely essential for detecting the difference between isotopomers." [ 38 ] [ 39 ] Turin and coworkers provide no quantum mechanical justification for this latter assertion. Note that the correct term for compounds differing in the number of isotopic substitutions is isotopologue ; isotopomers differ only in the position of the substitutions.
Vosshall , in commenting on Turin's work, notes that "the olfactory membranes are loaded with enzymes that can metabolise odorants, changing their chemical identity and perceived odour. Deuterated molecules would be poor substrates for such enzymes, leading to a chemical difference in what the subjects are testing. Ultimately, any attempt to prove the vibrational theory of olfaction should concentrate on actual mechanisms at the level of the receptor, not on indirect psychophysical testing." [ 15 ] Richard Axel co-recipient of the 2004 Nobel prize for physiology for his work on olfaction, expresses a similar sentiment, indicating that Turin's work "would not resolve the debate – only a microscopic look at the receptors in the nose would finally show what is at work. Until somebody really sits down and seriously addresses the mechanism and not inferences from the mechanism... it doesn't seem a useful endeavour to use behavioural responses as an argument". [ 13 ]
In response to the 2013 paper on cyclopentadecanone, [ 38 ] Block et al. [ 17 ] report that the human musk -recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to cyclopentadecanone and muscone (which has 30 hydrogens), fails to distinguish isotopologues of these compounds in vitro. Furthermore, the mouse (methylthio)methanethiol-recognizing receptor, MOR244-3, as well as other selected human and mouse olfactory receptors , responded similarly to normal, deuterated, and carbon-13 isotopologues of their respective ligands, paralleling results found with the musk receptor OR5AN1. Based on these findings, the authors conclude that the proposed vibration theory does not apply to the human musk receptor OR5AN1, mouse thiol receptor MOR244-3, or other olfactory receptors examined. Additionally, theoretical analysis by the authors shows that the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes. The authors conclude: "These and other concerns about electron transfer at olfactory receptors , together with our extensive experimental data, argue against the plausibility of the vibration theory."
In commenting on this work, Vosshall writes "In PNAS, Block et al.... shift the "shape vs. vibration" debate from olfactory psychophysics to the biophysics of the ORs themselves. The authors mount a sophisticated multidisciplinary attack on the central tenets of the vibration theory using synthetic organic chemistry, heterologous expression of olfactory receptors , and theoretical considerations to find no evidence to support the vibration theory of smell." [ 1 ] While Turin comments that Block used "cells in a dish rather than within whole organisms" and that "expressing an olfactory receptor in human embryonic kidney cells doesn't adequately reconstitute the complex nature of olfaction ...", Vosshall responds "Embryonic kidney cells are not identical to the cells in the nose ... but if you are looking at receptors, it's the best system in the world." [ 40 ] In a Letter to the Editor of PNAS, Turin et al. [ 41 ] raise concerns about Block et al. [ 17 ] and Block et al. respond. [ 42 ]
Recently, Saberi and Allaei have suggested that a functional relationship exists between molecular volume and the olfactory neural response. The molecular volume is an important factor, but it is not the only factor that determines the response of ORNs. The binding affinity of an odorant-receptor pair is affected by their relative sizes. The maximum affinity can be attained when the molecular volume of an odorant matches the volume of the binding pocket. [ 43 ] A recent study [ 44 ] describes the responses of primary olfactory neurons in tissue culture to isotopes and finds that a small fraction of the population (<1%) clearly discriminates between isotopes, some even giving an all-or-none response to H or D isotopologues of octanal. The authors attribute this to differences in hydrophobicity between normal and deuterated odorants.
|
https://en.wikipedia.org/wiki/Vibration_theory_of_olfaction
|
The technique of vibrational analysis with scanning probe microscopy allows probing the vibrational properties of materials at the submicrometer scale, and even of individual molecules. [ 1 ] [ 2 ] [ 3 ] This is accomplished by integrating scanning probe microscopy (SPM) and vibrational spectroscopy (Raman scattering and/or Fourier transform infrared spectroscopy, FTIR). This combination allows for much higher spatial resolution than can be achieved with conventional Raman/FTIR instrumentation. The technique is also nondestructive, requires minimal sample preparation, and provides several contrast mechanisms, such as intensity, polarization and wavelength contrast, while providing specific chemical information and topography images simultaneously.
Near-field scanning optical microscopy (NSOM) was described in 1984, [ 4 ] and used in many applications since then. [ 5 ] The combination of Raman scattering and NSOM techniques was first realized in 1995, when it was used for imaging a Rb -doped KTP crystal at a spatial resolution of 250 nm. [ 6 ]
NSOM employs two different methods for data collection and analysis: the fiber tip aperture approach and the apertureless metal tip approach. [ 1 ] In the aperture approach, a smaller aperture can increase the spatial resolution of NSOM; however, the transmission of light to the sample and the collection efficiency of the scattered/emitted light are also diminished. [ 7 ] Apertureless near-field scanning microscopy (ANSOM) was developed in the 1990s. ANSOM employs a metalized tip instead of an optical fiber probe. The performance of ANSOM strongly depends on the electric field enhancement factor of the metalized tip. This technique is based on surface plasmon resonance (SPR), which is the precursor of tip-enhanced Raman scattering (TERS) and surface-enhanced Raman scattering (SERS).
In 1997, Martin and Girard demonstrated theoretically that the electric field under a metallic or dielectric tip (in the apertureless NSOM technique) can be strongly enhanced if the incident field is along the tip axis. Since then a few groups have reported Raman or fluorescence enhancement in near-field optical spectroscopy by apertureless microscopy. [ 8 ] In 2000, T. Kalkbrenner et al. used a single gold particle as a probe for apertureless scanning and presented images of an aluminium film with 3 μm holes on a glass substrate. [ 9 ] The resolution of this apertureless method was 100 nm, comparable to that of fiber-based systems. [ 9 ] Recently, a carbon nanotube (CNT) with a conical end, tagged with gold nanoparticles, was applied as a nanometer-resolution optical probe tip for NSOM. [ 10 ] NSOM images were obtained with a spatial resolution of ~5 nm, demonstrating the potential of a composite CNT probe tip for nanoscale-resolution optical imaging.
There are two options for realizing the apertureless NSOM-Raman technique: TERS and SERS. TERS is frequently used for apertureless NSOM-Raman and can significantly enhance the spatial resolution. This technique requires a metal tip to enhance the signal of the sample, which is why an AFM metal tip is usually used for enhancing the electric field for molecule excitation. Raman spectroscopy was combined with AFM in 1999. [ 11 ] [ 12 ] A very narrow aperture of the tip was required to obtain a relatively high spatial resolution; such an aperture reduced the signal and was difficult to prepare. In 2000, Stöckle et al. [ 13 ] first designed a setup combining apertureless NSOM, Raman and AFM techniques, in which the tip had a 20 nm thick granular silver film on it. They reported a large gain in the Raman scattering intensity of a dye film (brilliant cresyl blue) deposited on a glass substrate when a metal-coated AFM tip was brought very close to the sample. About 2000-fold enhancement of Raman scattering and a spatial resolution of ~55 nm were achieved. [ 14 ]
Similarly, Nieman et al. [ 15 ] used an illuminated AFM tip coated with a 100 nm thick film of gold to enhance Raman scattering from polymer samples and achieved a resolution of 100 nm. In the early research on TERS, the most commonly used coating materials for the tip probe were silver and gold. [ 14 ] [ 15 ] High-resolution spatial maps of Raman signals were obtained with this technique from molecular films of such compounds as brilliant cresyl blue, malachite green isothiocyanate and rhodamine 6G, [ 16 ] as well as from individual carbon nanotubes. [ 17 ]
IR near-field scanning optical microscopy (IR-NSOM) is a powerful spectroscopic tool because it allows subwavelength resolution in IR spectroscopy. Previously, IR-NSOM was realized by applying a solid immersion lens with a refractive index of n, which shortens the wavelength (λ) to (λ/n), compared to FTIR-based IR microscopy. [ 18 ] In 2004, an IR-NSOM achieved a spatial resolution of ~λ/7, which is less than 1 μm. [ 18 ] This resolution was further improved to about λ/60, i.e. 50–150 nm, for a boron nitride thin film sample. [ 19 ]
IR-NSOM uses an AFM to detect the absorption response of a material to the modulated infrared radiation from an FTIR spectrometer and therefore is also referred to as AFM/FTIR spectroscopy. Two approaches have been used to measure the response of polymer systems to infrared absorption. The first mode relies on the AFM contact mode, and the second mode of operation employs a scanning thermal microscopy probe (invented in 1986 [ 20 ] ) to measure the polymer's temperature increase. In 2007, AFM was combined with infrared attenuated total reflection (IR-ATR) spectroscopy to study the dissolution process of urea in a cyclohexane / butanol solution with a high spatial resolution. [ 21 ]
There are two modes for the operation of the NSOM technique, [ 5 ] [ 22 ] with and without an aperture. These two modes have also been combined with near-field Raman spectroscopy. [ 7 ] [ 23 ] [ 24 ] The near-field aperture must be nanosized, which complicates the probe manufacturing process. [ 25 ] Also, the aperture method usually has a very weak signal due to the weak excitation and Raman scattering signals. Overall, these factors lower the signal-to-noise ratio in the aperture-based NSOM/Raman technique. Apertureless probes are based on a metal-coated tip and provide a stronger signal. [ 26 ]
Although the apertureless mode is more promising than the aperture mode, the latter is more widely used because of easier instrumental setup and operation. To obtain a high resolution Raman micrograph/spectrum, the following conditions should be met: (1) the size of the aperture must be on the order of the wavelength of the excitation light. (2) The distance from the tip of the probe to the sample must be smaller than excitation wavelength. (3) The instrument must remain stable over a long time. An important AFM feature is the ability to accurately control the distance between the sample and probe tip, which is the reason why the AFM-Raman combination is preferred for realizing Raman-NSOM.
The main drawback of the aperture mode is that the small aperture size reduces the signal intensity and is difficult to fabricate. Recently, researchers have focused on the apertureless mode, which utilizes SPR theory to produce stronger signals. There are two techniques supporting this mode: SERS and TERS.
The theory and instrumentation of Raman/AFM and IR/AFM combine the theory of SPR (AFM and NSOM) and Raman scattering, and this combination is based on TERS. In TERS, the electric field of the excitation source induces an SPR in the tip of the probe. If the electric field vector of the incident light is perpendicular (s-polarized) to the metal tip axis, the free electrons are driven to the lateral sides of the tip. If it is parallel (p-polarized) to the tip axis, the free electrons on the surface of the metal are confined to the end of the tip apex. As a consequence, there is a very large electric-field enhancement, which is sensed by the molecules close to it, leading to a stronger signal. [ 26 ]
A typical approach in a TERS experiment is to focus the laser beam on a metal tip with the light polarized along the tip axis, followed by collection of the surface-enhanced Raman scattered light from the sample in the enhancement zone of the tip using optics. [ 14 ]
Depending on the sample and experiment, different illumination geometries have been applied in TERS experiments, as shown in figure 4. With p-polarized (parallel to the surface normal) incidence light, the plasmon excitation at the tip is most efficient. If the focusing objective lens is also used for collecting the scattered photons (backscattering geometry), the optimum angle is around 55° with respect to the surface normal. This is because the scattering lobe is maximum with this configuration and it provides a much enhanced signal. [ 27 ] The setup of figure 4(A) is usually used for the large thick samples. Setup (B) handles semi-transparent or transparent samples, such as single cells, tissue samples and biopolymers. [ 14 ] The setup of figure 4(C) is preferred for opaque samples because all the light would be focused by the parabolic mirror .
Both TERS and SERS rely on a localized surface plasmon for enhancing the otherwise weak Raman signal. [ 29 ] The only difference between them is that the sample in SERS has a rough surface, which hinders the application of a sharp AFM-like tip. TERS, on the other hand, uses a metal-coated tip having some roughness at the nanoscale. [ 30 ] [ 31 ] The "hot spot" theory [ 32 ] is very popular in explaining the large enhancement of the signal: the signal from "hot spots" on the surface of the sample dominates the total signal from the sample. [ 33 ] This is reinforced by the fact that the distance between the nanoparticles and the sample is an important factor in obtaining a high Raman signal.
The Raman/AFM technique has two approaches: aperture and apertureless, and the apertureless mode is realized with SERS and TERS. Figure 5 shows an example of an integrated TERS system. There are five main components of a whole integrated TERS (apertureless) system: a microscope, an objective lens, an integrated AFM head, a Raman spectrometer and a CCD. The laser is focused on the sample on the piezo stage and on the AFM tip by moving the laser beam along the tip. The movement of the laser beam is achieved by the mirror in the top left corner. The XYZ piezo stage in the bottom left holds the sample. In this design, the laser beam is focused on the sample through an objective lens, and the scattered light is collected by the same lens.
This setup uses a low contact pressure to reduce damage to the AFM tip and the sample. [ 21 ] The laser power is typically below 1 mW. [ 21 ] A notch filter removes Rayleigh scattering of the excitation laser light from the back of the cantilever. The laser beam is focused on the apex of the gold-coated AFM tip and the sample. Laser scanning is accomplished by moving the mirror across the approaching tip; a small increase in background occurs when the laser spot focuses on the tip area. Sample scanning is performed by moving the XYZ piezo stage. The Raman signal is collected through the objective lens, the same lens that is also used to excite the sample.
Because of the diffraction limit in the resolution of conventional lens-based microscopes, namely $D = 0.61\lambda/(n\sin\theta)$, [ 34 ] the maximum resolution obtainable with an optical microscope is ~200 nm. A new type of lens using multiple scattering of light made it possible to improve the resolution to about 100 nm. [ 35 ] Several new microscopy techniques with sub-nanometer resolution have been developed in the last several decades, such as electron microscopy (SEM and TEM) and scanning probe microscopy (NSOM, STM and AFM). SPM differs from the other techniques in that the excitation and signal collection are very close (at less than the diffraction-limit distance) to the sample. Instead of using a conventional lens to obtain magnified images of samples, an SPM scans across the sample with a very sharp probe. Whereas SEM and TEM usually require vacuum and extensive sample preparation, SPM measurements can be performed under atmospheric or liquid conditions.
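A quick back-of-the-envelope sketch of the diffraction limit quoted above (the wavelength and objective parameters are illustrative, not from the source):

```python
import math

wavelength_nm = 532.0                  # green excitation light (illustrative)
for n, theta_deg, label in ((1.0, 64.0, "air objective"),
                            (1.515, 67.0, "oil immersion")):
    NA = n * math.sin(math.radians(theta_deg))   # numerical aperture
    D = 0.61 * wavelength_nm / NA                # Rayleigh resolution limit
    print(f"{label}: NA = {NA:.2f}, resolution ~ {D:.0f} nm")
# Even a high-NA oil objective only approaches the ~200 nm limit quoted
# above, which is why near-field (NSOM/SPM) methods are needed at the nanoscale.
```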
Although AFM and NSOM can achieve atomic-scale resolution, they do not provide chemical information about the sample. The infrared part of the electromagnetic spectrum covers molecular vibrations, which can characterize chemical bonding within the sample. [ 36 ]
By combining SPM and vibrational spectroscopy, AFM/IR-NSOM and AFM-IR have emerged as useful characterization tools that integrate the high spatial resolution of AFM with IR spectroscopy. [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] This new technique can be referred to as AFM-FTIR, AFM-IR and NSOM/FTIR. AFM and NSOM can be used to detect the response when modulated infrared radiation generated by an FTIR spectrometer is absorbed by a material. In the AFM-IR technique the absorption of the radiation by the sample causes a rapid thermal expansion wave, which is transferred to the vibrational modes of the AFM cantilever. Specifically, the thermal expansion wave induces a vertical displacement of the AFM tip (Figure 6). [ 47 ] A local IR absorption spectrum can then be obtained by measuring the amplitude of the cantilever as a function of the IR source wavelength. For example, when the laser wavelength is tuned toward a vibrational absorption frequency of the sample, the displacement amplitude of the cantilever increases until the laser wavelength reaches the sample's absorption maximum. [ 47 ] The displacement of the cantilever then decreases as the laser wavelength is tuned past the absorption maximum. This approach can map chemical composition beyond the diffraction-limited resolution and can also provide three-dimensional topographic, thermal and mechanical information at the nanoscale. Overall, it overcomes the resolution limit of traditional IR spectroscopy and adds chemical and mechanical mapping to AFM and NSOM.
The ideal IR source should be monochromatic and tunable within a wide range of wavelengths. According to $T \propto d^4/\lambda^4$, where T is the transmission coefficient, d the aperture diameter and λ the wavelength, aperture-based NSOM/FTIR transmission is even more limited at long infrared wavelengths; [ 48 ] [ 49 ] therefore, an intense IR source is needed to offset the low transmission through the optical fiber. Common bright IR light sources are the free-electron laser (FEL), [ 2 ] [ 39 ] [ 45 ] color-center lasers, CO₂ lasers and laser diodes. FEL is an excellent IR source, with a 2–20 μm spectral range, [ 50 ] [ 51 ] short (picosecond) pulses and high average power (0.1–1 W). Alternatively, a tabletop picosecond optical parametric oscillator (OPO) can be used, which is less expensive but has limited tunability and lower power output. [ 44 ] [ 52 ]
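To see how severely the $T \propto d^4/\lambda^4$ scaling penalizes infrared work, a tiny sketch compares a visible and a mid-IR wavelength through the same 100 nm aperture (the prefactor is omitted, so only the ratio between the two cases is meaningful):

```python
d = 100e-9                       # aperture diameter: 100 nm
for lam_um in (0.5, 10.0):       # visible vs. mid-infrared wavelength
    lam = lam_um * 1e-6
    print(f"lambda = {lam_um:4.1f} um -> relative T ~ {(d / lam) ** 4:.1e}")
# Going from 0.5 um (visible) to 10 um (mid-IR) through the same aperture
# cuts the transmitted power by a factor of (10 / 0.5)**4 = 160,000.
```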
The essence of NSOM/FTIR is that it allows the detection of non-propagating evanescent waves in the near-field (less than one wavelength from the sample), thus yielding high spatial resolution. Depending on the detection modes of these non-propagating evanescent waves, two NSOM/FTIR instrumentations are available: apertureless NSOM/FTIR and aperture-based NSOM/FTIR.
In aperture-based NSOM/FTIR, the probe is a waveguide with a tapered tip and a very small, sub-wavelength aperture. When the aperture is brought into the near field, it collects the non-propagating light and guides it to the detector. In general, there are two modes when the aperture is scanned over the sample: illumination mode and collection mode (Figure 7).
The high-quality infrared fiber tip is very important in realizing NSOM/FTIR technique. There are several types of fibers, such as sapphire , chalcogenide glass , fluoride glass and hollow silica guides. [ 53 ] Chalcogenide glasses are widely used because of their high transmittance in the broad IR range of 2–12 μm. [ 54 ] The fluoride fibers also exhibit low transmitting losses beyond 3.0 μm.
The probe is a sharp metal tip ending in a single atom or a few atoms. The sample is illuminated from the far field and the radiation is focused at the contact area between the probe and the sample. When the tip approaches the sample, usually to within 10 nm, the incident electromagnetic field is enhanced due to resonant surface plasmon excitation as well as hot spots at the sharp tip. The dipole interaction between the tip and the sample changes the non-propagating waves into propagating waves by scattering, and a detector collects the signal in the far field. Apertureless NSOM/FTIR usually has better resolution (~5–30 nm) than aperture-based NSOM/FTIR (~50–150 nm). One main challenge in apertureless NSOM/FTIR is a strong background signal, because scattering is obtained from both the near-field region and more remote areas of the probe. Thus, the small near-field contribution to the signal has to be extracted from the background. One solution is to use a very flat sample with only optical spatial fluctuations. [ 55 ] Another solution is to apply constant-height mode scanning or pseudo-constant-height mode scanning. [ 56 ]
Figure 8 shows the experimental setup used in NSOM/FTIR in the external reflection mode. The FEL source is focused on the sample from the far field using a mirror. The distance between the probe and the sample is kept at a few nanometers during scanning.
Figure 9 shows the cross-section of an NSOM/FTIR instrument. The sample is placed on a piezoelectric tube scanner, in which the x-y tube has four sections, namely x+, x-, y+ and y-. Lateral (x-y plane) oscillation of the fiber tip is induced by applying an AC voltage to a dither piezo scanner. Also, the fiber tip is fixed to a bimorph piezo scanner so that the amplitude of the tip's oscillation can be monitored through the scanner.
The spatial resolution of an AFM-IR instrument is related to the contact area between the probe and the sample. [ 58 ] The contact radius a is given by a³ = 3PR/4E* with 1/E* = (1 − ν₁²)/E₁ + (1 − ν₂²)/E₂, where P is the force applied to the probe, R is the tip radius, ν₁ and ν₂ are the Poisson ratios of the sample and probe, respectively, and E₁ and E₂ are the elastic moduli of the sample and probe materials, respectively. [ 59 ] Typically, AFM-IR has a lateral spatial resolution of 10–400 nm, [ 46 ] for example, 100 nm, [ 43 ] λ/150, [ 40 ] and λ/400. [ 41 ] Recently, Ruggeri et al. have demonstrated the acquisition of infrared absorption spectra and chemical maps at the single-molecule level for protein molecules of ca. 10 nm diameter and a molecular weight of 400 kDa.
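As an illustration of how the contact mechanics limit resolution, the sketch below evaluates the Hertzian contact radius from the relations above; the force, tip radius, moduli and Poisson ratios are assumed, typical-order values rather than parameters from the source:

```python
import math

# Minimal sketch (illustrative values): Hertzian contact radius
# a^3 = 3PR/(4E*), 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2,
# which sets the scale of the AFM-IR contact area and lateral resolution.
P = 10e-9              # contact force, 10 nN (typical AFM setpoint, assumed)
R = 20e-9              # probe tip radius of curvature, 20 nm (assumed)
E1, nu1 = 3e9, 0.35    # polymer-like sample: E = 3 GPa, nu = 0.35 (assumed)
E2, nu2 = 150e9, 0.27  # silicon probe: E = 150 GPa, nu = 0.27 (assumed)

E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
a = (3 * P * R / (4 * E_star)) ** (1 / 3)               # contact radius

print(f"effective modulus E* = {E_star/1e9:.1f} GPa")
print(f"contact radius a    = {a*1e9:.2f} nm")  # a few nm, far below λ_IR
```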
In AFM-IR, an AFM probe is used to measure the absorption response of the sample to infrared radiation. The general approach for AFM/FTIR is shown in Figure 10. [ 60 ]
There are a few different experimental setups for projecting the infrared radiation onto the sample: top, side, and bottom illumination (Figure 11). [ 3 ]
In the first AFM-IR setup developed, the sample is mounted on an infrared-transparent zinc selenide prism for excitation (Figure 12), and a tunable IR laser based on an optical parametric oscillator (OPO) irradiates the molecules to be probed. As in conventional ATR spectroscopy, the IR beam illuminates the sample through total internal reflection (Figure 12). The sample heats up as it absorbs radiation, causing rapid thermal expansion of the sample surface. [ 40 ] [ 44 ] This expansion excites resonant oscillations of the AFM cantilever in a characteristic ringdown pattern (a ringdown pattern is the exponential decay of the cantilever oscillation [ 44 ] ). Fourier transform analysis of the signal yields the amplitudes and frequencies of the oscillations. The cantilever amplitudes provide information on the local absorption spectra, whereas the oscillation frequencies depend on the mechanical stiffness of the sample (Figure 12). [ 43 ] [ 44 ]
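The ringdown analysis described above can be sketched on a synthetic signal: an exponentially decaying cantilever oscillation is Fourier transformed to recover its resonance frequency and amplitude. The sampling rate, resonance frequency and decay time below are assumed values for illustration only:

```python
import numpy as np

# Minimal sketch (synthetic data, not real instrument output): a cantilever
# ringdown is a decaying oscillation; its FFT gives the contact-resonance
# frequency, and the peak amplitude tracks the local IR absorption.
fs = 10e6                       # sampling rate, 10 MHz (assumed)
t = np.arange(0, 2e-3, 1 / fs)  # 2 ms record
f0, tau = 250e3, 200e-6         # resonance 250 kHz, decay time 200 µs (assumed)
ringdown = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(ringdown))
freqs = np.fft.rfftfreq(len(ringdown), 1 / fs)

peak = np.argmax(spectrum)
print(f"recovered resonance: {freqs[peak]/1e3:.1f} kHz")   # ~250 kHz
print(f"peak amplitude (tracks local absorption): {spectrum[peak]:.1f}")
```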
NSOM combined with FTIR/Raman techniques can provide local chemical information together with topographical details. These techniques are non-destructive and can work in a variety of environments (for example, liquids), such as when detecting single biomolecules. [ 18 ] [ 59 ] [ 61 ] [ 62 ] The illuminated area of the sample is relatively large, around 1 μm, whereas the sampling area is only ~10 nm. This means that a strong background from an unclean tip contributes to the overall signal, hindering signal analysis. [ 56 ]
Raman spectroscopy in general can be time-consuming because of its low scattering efficiency (<1 in 10⁷ photons). It usually takes several minutes to accumulate a conventional Raman spectrum, and this time can be much longer in Raman-NSOM; for example, 9 hours for a 32×32-pixel image. [ 6 ] [ 19 ] As for near-field IR/AFM, high optical losses in aqueous environments (water absorbs strongly in the IR range) reduce the signal-to-noise ratio. [ 18 ] [ 63 ]
Improving the resolution and enhancing the instrumentation with user-friendly hardware and software will make AFM/NSOM coupled with IR/Raman a useful characterization tool in many areas, including the biomedical, materials and life sciences. [ 64 ] For example, this technique was used to examine a spin-cast thin film of poly(dimethylsiloxane) with polystyrene on it by scanning the tip over the sample. The shape and size of the polystyrene fragments were detected at high spatial resolution owing to their strong absorption at specific resonance frequencies. [ 65 ] Other examples include the characterization of inorganic boron nitride thin films with IR-NSOM. [ 16 ] Images of single rhodamine 6G (Rh-6G) molecules were obtained with a spatial resolution of 50 nm. [ 66 ] These techniques can also be used in numerous biology-related applications, including the analysis of plant materials, bone, and single cells. A biological application was demonstrated by detecting conformational changes of cholesteryl oleate caused by FEL irradiation with a spatial resolution below the diffraction limit. [ 67 ] Researchers have also used Raman/NSOM to track the formation of the energy-storing polymer polyhydroxybutyrate in the bacterium Rhodobacter capsulatus. [ 68 ]
This characterization tool may also help in kinetic studies of physical and chemical processes at a wide variety of surfaces, providing chemical specificity via IR spectroscopy as well as high-resolution imaging via AFM. [ 18 ] For example, the hydrogen termination of the Si(100) surface was studied by monitoring the Si–O absorbance to characterize the reaction between the silicon surface and atmospheric oxygen. [ 69 ] Studies were also conducted in which the reactivity toward water vapor of a 1000-nm-thick poly(tert-butyl methacrylate) (PTBMA) layer combined with a photochemically modified 500-nm-thick poly(methacrylic acid) (PMAA) layer showed different absorption bands before and after water uptake by the polymer. Not only was the increased swelling of the PMAA (280 nm) observed, but the different water absorption of the two polymers was also revealed by the different transmission of IR light at a much smaller dimension (<500 nm). These results are relevant to polymer, chemical and biological sensors, as well as to tissue engineering and artificial-organ studies. [ 70 ] Because of their high spatial resolution, NSOM/AFM-Raman/IR techniques can be used to measure the width of multilayer films, including layers that are too small (in the x and y directions) to be probed with conventional IR or Raman spectroscopy. [ 39 ]
|
https://en.wikipedia.org/wiki/Vibrational_analysis_with_scanning_probe_microscopy
|
A vibrational bond is a chemical bond that occurs between two very heavy atoms, such as bromine, and a very light atom, such as hydrogen, at very high energy states. Vibrational bonds only exist for a few milliseconds. The bond is detectable through modern analytical chemistry and is significant because it affects the rate at which other reactions can occur.
Vibrational bonds were mathematically predicted almost thirty years before they were experimentally observed. The original theoretical calculations were carried out by D.C. Clary and J.N.L. Connor during the early 1980s. Together they hypothesized that, with very heavy atoms and light atoms at high energy states, the system would stabilize and create temporary bonds for very short periods of time. The vibrational bond would be weaker than any currently known bond, such as the familiar ionic or covalent bonds. [ 1 ]
One year after the theoretical prediction of vibrational bonds, J. Manz and his team confirmed the earlier calculations and elaborated on them by showing that vibrational bonds were most likely to occur during symmetric reactions, while noting that they may also be possible in asymmetric reactions. [ 2 ] The team explained that although the vibrational bonding theories proved to be correct, they found some inconsistencies with the 'classic model', observing that symmetric reactions show resonance, but only in certain transition states. The classic model would nevertheless remain viable for predicting vibrational bonds. [ 2 ]
In 1989, Donald Fleming noticed that a reaction between bromine and muonium slowed down as temperature increased. This phenomenon was attributed to a "vibrational bond" and would capture Fleming's attention again in 2014. In 1989 the technology did not exist to collect sufficient data on the reaction, and Fleming and his team moved away from the research. [ 3 ]
When Donald Fleming and his team returned to their investigation of vibrational bonds, the BrMuBr reaction slowed at high temperatures, as they had expected from their 1989 experiments. Using modern instrumental analysis based on photodetachment electron spectroscopy, the vibrational bond was detected, but it lasted only a few milliseconds. [ 4 ] The vibrational bond behaved differently from van der Waals interactions because the energy was balanced differently.
In chemistry it is known that increased temperature increases the rate of reaction; however, vibrational bonds are not formed like covalent bonds, in which electrons are shared between the two bonding atoms. Vibrational bonds are created at high energy, where the muonium bounces to and from the bromine atoms "like a ping pong ball bouncing between two bowling balls," according to Donald Fleming. [ 5 ] This bouncing action lowers the potential energy of the BrMuBr molecule and therefore slows the rate of the reaction. [ 3 ]
This type of bond has been confirmed in BrMuBr molecules, but with the heavier isotopes of hydrogen (protium, deuterium, and tritium), vibrational bonding can occur only once the van der Waals forces are overcome and the vibrational bond is formed.
This discovery changes the understanding of chemical bonds: together with van der Waals interactions, the recently discovered vibrational bonding shows that different bonds involve different mechanisms and energies, and its experimental discovery has the potential to encourage more research into isotopic interactions. [ 3 ]
|
https://en.wikipedia.org/wiki/Vibrational_bond
|
Vibrational circular dichroism ( VCD ) is a spectroscopic technique which detects differences in attenuation of left and right circularly polarized light passing through a sample. It is the extension of circular dichroism spectroscopy into the infrared and near infrared ranges. [ 1 ]
Because VCD is sensitive to the mutual orientation of distinct groups in a molecule, it provides three-dimensional structural information. It is thus a powerful technique: VCD spectra of enantiomers can be simulated using ab initio calculations, allowing the identification of absolute configurations of small molecules in solution from VCD spectra. Among such quantum computations of VCD spectra resulting from the chiral properties of small organic molecules are those based on density functional theory (DFT) and gauge-including atomic orbitals (GIAO). A simple example of the experimental results obtained by VCD is the spectral data obtained within the carbon-hydrogen (C-H) stretching region of 21 amino acids in heavy water solutions. Measurements of vibrational optical activity (VOA) thus have numerous applications, not only for small molecules, but also for large and complex biopolymers such as muscle proteins (myosin, for example) and DNA.
While the fundamental quantity associated with infrared absorption is the dipole strength, the differential absorption is also proportional to the rotational strength, a quantity that depends on both the electric and magnetic dipole transition moments. The sensitivity of a molecule's handedness toward circularly polarized light results from the form of the rotational strength. A rigorous theory of VCD was developed concurrently by the late Professor P.J. Stephens, FRS, at the University of Southern California, [ 2 ] [ 3 ] and the group of Professor A.D. Buckingham, FRS, at Cambridge University in the UK, [ 4 ] and first implemented analytically in the Cambridge Analytical Derivative Package (CADPAC) by R.D. Amos. [ 5 ] Earlier developments by D.P. Craig and T. Thirunamachandran at the Australian National University [ 6 ] and Larry A. Nafie and Teresa B. Freedman at Syracuse University, [ 7 ] though theoretically correct, could not be straightforwardly implemented, which prevented their use. Only with the development of the Stephens formalism as implemented in CADPAC did fast, efficient and theoretically rigorous calculation of the VCD spectra of chiral molecules become feasible. This also stimulated the commercialization of VCD instruments by Biotools, Bruker, Jasco and Thermo-Nicolet (now Thermo-Fisher).
Extensive VCD studies have been reported for both polypeptides and several proteins in solution; [ 8 ] [ 9 ] [ 10 ] several recent reviews have also been compiled. [ 11 ] [ 12 ] [ 13 ] [ 14 ] An extensive, though not comprehensive, list of VCD publications is also provided in the "References" section. The published reports over the last 22 years have established VCD as a powerful technique, with improved results over those previously obtained by visible/UV circular dichroism (CD) or optical rotatory dispersion (ORD) for proteins and nucleic acids.
The effects due to solvent on stabilizing the structures (conformers and zwitterionic species) of amino acids and peptides and the corresponding effects seen in the vibrational circular dichroism (VCD) and Raman optical activity spectra (ROA) have been recently documented by a combined theoretical and experimental work on L-alanine and N-acetyl L-alanine N'-methylamide. [ 15 ] [ 16 ] Similar effects have also been seen in the nuclear magnetic resonance (NMR) spectra by the Weise and Weisshaar NMR groups at the University of Wisconsin–Madison . [ 17 ]
VCD spectra of nucleotides, synthetic polynucleotides and several nucleic acids, including DNA, have been reported and assigned in terms of the type and number of helices present in A-, B-, and Z-DNA.
VCD can be regarded as a relatively recent technique. Although vibrational optical activity, and in particular vibrational circular dichroism, has been known for a long time, the first VCD instrument was developed in 1973 [ 18 ] and commercial instruments have been available only since 1997. [ 19 ]
For biopolymers such as proteins and nucleic acids, the difference in absorbance between the levo- and dextro- configurations is five orders of magnitude smaller than the corresponding (unpolarized) absorbance. Therefore, VCD of biopolymers requires the use of very sensitive, specially built instrumentation as well as time-averaging over relatively long intervals of time, even with such sensitive VCD spectrometers.
Most CD instruments produce left- and right-circularly polarized light which is then either sine-wave or square-wave modulated, with subsequent phase-sensitive detection and lock-in amplification of the detected signal. In the case of FT-VCD, a photo-elastic modulator (PEM) is employed in conjunction with an FTIR interferometer set-up. An example is a Bomem model MB-100 FTIR interferometer equipped with the additional polarizing optics/accessories needed for recording VCD spectra.
A parallel beam emerges through a side port of the interferometer and passes first through a wire-grid linear polarizer and then through an octagonal ZnSe crystal PEM, which modulates the polarized beam at a fixed, lower frequency such as 37.5 kHz. A mechanically stressed crystal such as ZnSe exhibits birefringence when stressed by an adjacent piezoelectric transducer. The linear polarizer is positioned close to, and at 45 degrees with respect to, the ZnSe crystal axis. The polarized radiation focused onto the detector is doubly modulated, both by the PEM and by the interferometer setup. A very low-noise detector, such as MCT (HgCdTe), is also selected for phase-sensitive detection of the VCD signal. The first dedicated VCD spectrometer brought to market was the ChiralIR from Bomem/BioTools, Inc. in 1997. Today, Thermo-Electron, Bruker, Jasco and BioTools offer either VCD accessories or stand-alone instrumentation. [ 20 ] To prevent detector saturation, an appropriate long-wave-pass filter is placed before the very low-noise MCT detector, which allows only radiation below 1750 cm⁻¹ to reach the MCT detector; the latter, however, measures radiation only down to 750 cm⁻¹. FT-VCD spectra of the selected sample solution are then accumulated, digitized and stored by an in-line computer. Published reviews that compare various VCD methods are also available. [ 21 ] [ 22 ]
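The modulation scheme can be illustrated with a minimal lock-in sketch: a synthetic detector signal carrying a tiny differential-absorbance component at the PEM frequency is demodulated by multiplying with the PEM reference and averaging. All signal magnitudes here are assumed for illustration, not measured values:

```python
import numpy as np

# Minimal sketch (synthetic signal): the PEM alternates left/right circular
# polarization at 37.5 kHz, so the detector sees a large unpolarized term plus
# a tiny ΔA component at the PEM frequency; phase-sensitive (lock-in)
# detection recovers the small component.
fs, f_pem = 1e6, 37.5e3         # sampling rate and PEM frequency
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal (integer number of periods)
dA = 1e-5                       # differential-absorbance term (assumed size)
detector = 1.0 + dA * np.sin(2 * np.pi * f_pem * t) \
               + 1e-4 * np.random.randn(t.size)      # detector noise (assumed)

# Lock-in: multiply by the PEM reference and low-pass (here: a time average).
reference = np.sin(2 * np.pi * f_pem * t)
recovered = 2.0 * np.mean(detector * reference)  # factor 2 undoes the sin^2 average

print(f"true dA = {dA:.2e}, recovered = {recovered:.2e}")
```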
VCD spectra have also been reported in the presence of an applied external magnetic field. [ 23 ] This method can enhance the VCD spectral resolution for small molecules. [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ]
ROA is a technique complementary to VCD, especially useful in the 50–1600 cm⁻¹ spectral region; it is considered the technique of choice for determining optical activity for photon energies of less than 600 cm⁻¹.
|
https://en.wikipedia.org/wiki/Vibrational_circular_dichroism
|
Vibrational energy relaxation , or vibrational population relaxation , is a process in which the population distribution of molecules in quantum states of high energy level caused by an external perturbation returns to the Maxwell–Boltzmann distribution .
In solution, the process proceeds via intra- and intermolecular energy transfer. The excess energy of the excited vibrational mode is transferred to kinetic modes in the same molecule or to the surrounding molecules. Through this process, the initially excited vibrational mode relaxes to a vibrational state of lower energy. This relaxation is called longitudinal relaxation, and its time constant is the longitudinal relaxation time, T₁.
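A minimal numerical sketch of longitudinal relaxation, assuming a single-exponential return of the excited-state population toward its equilibrium value (the T₁, populations and delay times are illustrative assumptions):

```python
import numpy as np

# Minimal sketch: longitudinal vibrational population relaxation,
# n(t) = n_eq + (n_0 - n_eq) * exp(-t / T1),
# returning toward the Maxwell–Boltzmann equilibrium value.
T1 = 10e-12            # longitudinal relaxation time, 10 ps (assumed)
n0, n_eq = 0.30, 0.01  # pump-excited and thermal-equilibrium populations (assumed)

t = np.linspace(0, 50e-12, 6)  # probe delays out to 50 ps
n = n_eq + (n0 - n_eq) * np.exp(-t / T1)

for ti, ni in zip(t, n):
    print(f"delay {ti*1e12:4.0f} ps -> excited-state population {ni:.3f}")
```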
Vibrational energy relaxation has been studied with time-resolved spectroscopy. Upon excitation by the pump pulse, a population distribution in the vibrationally excited state is created by infrared absorption or by a Raman process while the molecule is in the electronic ground state. In addition, through an electronic transition, the molecule often reaches a vibrationally excited level of the electronically excited state. The energy relaxation from these vibrationally excited states can then be observed with the probe pulse, which is delayed with respect to the pump pulse.
|
https://en.wikipedia.org/wiki/Vibrational_energy_relaxation
|
Vibrational spectroscopic maps are a series of ab initio , semiempirical , or empirical models tailored to specific IR probes to describe vibrational solvatochromic effects on molecular spectra quantitatively. [ 1 ]
Coherent multidimensional spectroscopy, [ 2 ] [ 3 ] a nonlinear spectroscopy utilizing multiple time-delayed pulses, is a technique that enables the measurement of solvation-induced frequency shifts and the time-correlations of the fluctuating frequencies. Researchers employ various organic and biochemical methods to introduce small vibrational probes into a variety of molecular systems: chemicals, proteins, nucleic acids, etc. [ 4 ] These probes, labeled with infrared (IR) markers, are subjected to spectroscopic investigation to obtain quantitative insight into various features of chemical and biological systems. In general, interpreting the experimental multidimensional spectra to obtain information on the underlying molecular processes requires theoretical modeling. [ 5 ]
The vibrational frequency shifts that arise from the complex intermolecular interactions of small IR probes with their surroundings in the condensed phase are minute, often representing fractions of the thermal energy. The numerical accuracy of advanced quantum mechanical calculations is not sufficient to model these shifts accurately. [ 6 ] Consequently, researchers commonly resort to mapping procedures, which correlate certain physical variables calculated for the probe molecule with spectroscopic properties such as vibrational frequencies. These mapping procedures are referred to as vibrational spectroscopic maps within the field.
Typically, the physical variables employed in vibrational frequency maps include electric potentials , electric fields , distributed higher multipole moments , and other relevant factors evaluated at specific points surrounding the molecule.
As an example, the vibrational frequency associated with a localized vibrational mode is correlated with the electrostatic potential and electric field values at a designated set of points known as distributed sites within the infrared (IR) chromophore. [ 7 ]
The vibrational frequency shift, denoted as $\Delta\omega_j$, for the j-th normal mode of a given probe molecule is defined as the difference between the actual vibrational frequency $\omega_j$ of the mode in solution and the frequency $\omega_{j,0}$ in the gas phase.
$$\Delta\omega_j \equiv \omega_j - \omega_{j,0}$$
ref name=":2"> Buckingham, A. D. (1960). "Solvent effects in vibrational spectroscopy" . Transactions of the Faraday Society . 56 : 753. doi : 10.1039/tf9605600753 . ISSN 0014-7672 . </ref> [ 8 ] [ 9 ] [ 10 ] [ 11 ]
From an effective Hamiltonian for the solute in the presence of molecular environment, one can derive the effective vibrational force constant (or Hessian ) matrix approximately as follows: [ 11 ] [ 12 ]
$$k_{jk} \approx M_j \omega_j^2 \,\delta_{jk} + \left.\frac{\partial^2 U(\mathbf{Q})}{\partial Q_j\,\partial Q_k}\right|_0 - \sum_i \frac{g_{ijk}}{M_i \omega_i^2} \left.\frac{\partial U(\mathbf{Q})}{\partial Q_i}\right|_0$$
where the subscript 0 means the quantity is evaluated at the gas-phase geometry.
In the limiting case where the vibrational couplings of the normal mode of interest with the other vibrational modes are relatively weak, the vibrational frequency shift in solution from the gas-phase frequency under this weak coupling approximation (WCA) is given by [ 13 ] [ 11 ]
$$\Delta\omega_j^{\mathrm{WCA}} = \left.\left[\hat{F}_j^{EA} + \hat{F}_j^{MA}\right] U(\mathbf{Q})\right|_0$$
Here, $\hat{F}_j^{EA}$ and $\hat{F}_j^{MA}$ are the electric anharmonicity (EA) and mechanical anharmonicity (MA) operators, respectively. These operators are defined as
$$\hat{F}_j^{EA} = \frac{1}{2M_j\omega_j}\,\frac{\partial^2}{\partial Q_j^2}$$
and
$$\hat{F}_j^{MA} = -\frac{1}{2M_j\omega_j}\sum_i \frac{g_{ijj}}{M_i\omega_i^2}\,\frac{\partial}{\partial Q_i}$$
By substituting a relevant expression for the intermolecular interaction potential into the WCA expression for $\Delta\omega_j^{\mathrm{WCA}}$, one can derive the vibrational frequency shift for the specific theoretical potential model under consideration.
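The following sketch evaluates the WCA expression for a single mode, using numerical derivatives of a model solute-solvent potential U(Q); the mass, frequency, cubic anharmonicity and potential are all assumed model values, not parameters from the literature:

```python
# Minimal sketch (model numbers, arbitrary consistent units): the single-mode WCA shift
#   Δω_j = (1/2MΩ) U''(0) - (1/2MΩ) * (g/(MΩ^2)) * U'(0)
# obtained by applying the EA and MA operators to a model potential U(Q).
M, omega = 1.0, 0.01   # reduced mass and gas-phase frequency of the mode (assumed)
g = -2e-5              # cubic (mechanical) anharmonicity g_jjj (assumed)

def U(Q):
    # model solute-solvent interaction potential, linear + quadratic in Q (assumed)
    return -3e-4 * Q + 5e-5 * Q**2

h = 1e-4                                  # finite-difference step
U1 = (U(h) - U(-h)) / (2 * h)             # U'(0), central difference
U2 = (U(h) - 2 * U(0) + U(-h)) / h**2     # U''(0), central difference

dw = (1 / (2 * M * omega)) * U2 \
     - (1 / (2 * M * omega)) * (g / (M * omega**2)) * U1
print(f"WCA frequency shift: {dw:.3e} (same units as omega)")
```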
While several rigorous theories of vibrational solvatochromism based on physical approximations have been proposed, [ 1 ] these sophisticated models often necessitate extensive quantum chemistry calculations performed at elevated levels of theory with a large basis set. Current electronic structure methods fall short of providing vibrational frequencies directly comparable to experimentally measured frequency shifts, especially when the shifts are on the order of a few wavenumbers. [ 14 ] [ 15 ]
To accurately determine the coefficients in vibrational solvatochromism expressions, researchers frequently employ multivariate least-squares fitting. This involves fitting a sufficiently extensive set of training data obtained from quantum chemistry calculations of vibrational frequency shifts for numerous clusters containing a solute and multiple solvent molecules.
An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. [ 4 ] This method involves calculating the solvent electric potentials at these specific solute sites using the atomic partial charges of the surrounding solvent molecules. The vibrational frequency shift of the solute molecule, denoted as $\Delta\omega_j(\mathbf{Q})$, for the j-th vibrational mode can be represented as [ 10 ] [ 7 ]
$$\Delta\omega_j(\mathbf{Q}) = \omega_j(\mathbf{Q}) - \omega_{j0} = \sum_{k=1}^{N} b_{jk}\,\phi_k(\mathbf{Q})$$
Here, $\omega_j(\mathbf{Q})$ represents the vibrational frequency of the j-th normal mode in solution, $\omega_{j0}$ signifies the vibrational frequency in the gas phase, N denotes the number of distributed sites on the solute molecule, $\phi_k(\mathbf{Q})$ denotes the solvent electric potential at the k-th site of the solute molecule, and $b_{jk}$ are the parameters to be determined through least-squares fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This method provides a means to quantify the impact of solvation on the vibrational frequencies of the solute molecule.
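A minimal sketch of how such an electrostatic map is applied: the solvent electric potential is evaluated at two distributed solute sites from solvent point charges, and the frequency shift is formed as the b-weighted sum. The sites, charges and map coefficients below are hypothetical placeholders, not a published map:

```python
import numpy as np

# Minimal sketch (hypothetical map): evaluate φ_k at distributed solute sites
# from solvent point charges, then form Δω_j = Σ_k b_jk φ_k.
solute_sites = np.array([[0.0, 0.0, 0.0],   # e.g. C of a carbonyl probe (assumed)
                         [0.0, 0.0, 1.2]])  # e.g. O of the probe (assumed)
b = np.array([40.0, -55.0])                 # fitted coefficients, cm^-1 per a.u. (assumed)

solvent_charges = np.array([-0.83, 0.415, 0.415])  # water-like charge set (assumed)
solvent_xyz = np.array([[3.0,  0.0, 0.5],
                        [3.6,  0.7, 0.5],
                        [3.6, -0.7, 0.5]])

def potentials(sites, charges, positions):
    """Coulomb potential φ_k = Σ_a q_a / |r_k - r_a| (atomic-style units)."""
    return np.array([np.sum(charges / np.linalg.norm(positions - s, axis=1))
                     for s in sites])

phi = potentials(solute_sites, solvent_charges, solvent_xyz)
delta_omega = b @ phi
print(f"site potentials: {phi}")
print(f"map-predicted shift: {delta_omega:.2f} cm^-1")
```

In practice the coefficients $b_{jk}$ would come from a least-squares fit (for example with numpy.linalg.lstsq) over many solute-solvent clusters, as described above.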
Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule. [ 4 ] [ 16 ] [ 17 ]
Vibrational spectroscopic maps have been developed for a diverse range of vibrational modes across various molecular systems and functional groups, [ 1 ] [ 4 ] including, for example, the amide I mode of polypeptides and the stretching modes of small IR probes such as carbonyl, nitrile and azido groups.
|
https://en.wikipedia.org/wiki/Vibrational_spectroscopic_map
|
The vibrational temperature is commonly used in thermodynamics to simplify certain equations. It has units of temperature and is defined as
$$\theta_{\mathrm{vib}} = \frac{h\nu}{k_{\text{B}}} = \frac{hc\tilde{\nu}}{k_{\text{B}}}$$
where $h$ is the Planck constant, $k_{\text{B}}$ is the Boltzmann constant, $c$ is the speed of light, $\tilde{\nu}$ is the wavenumber, and $\nu$ (Greek letter nu) is the characteristic frequency of the oscillator.
The vibrational temperature is used commonly when finding the vibrational partition function .
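As a worked example, the sketch below evaluates θ_vib for N₂ (ν̃ ≈ 2359 cm⁻¹, a standard literature value) and then the harmonic-oscillator vibrational partition function at room temperature:

```python
import numpy as np

# Minimal sketch: θ_vib = h c ν̃ / k_B for N2, then the harmonic-oscillator
# vibrational partition function q_vib = 1 / (1 - exp(-θ_vib / T)).
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light, cm/s (so ν̃ can stay in cm^-1)
kB = 1.380649e-23    # Boltzmann constant, J/K

nu_tilde = 2359.0    # N2 vibrational wavenumber, cm^-1
theta_vib = h * c * nu_tilde / kB
print(f"theta_vib(N2) = {theta_vib:.0f} K")   # ≈ 3394 K

T = 300.0
q_vib = 1.0 / (1.0 - np.exp(-theta_vib / T))  # energies from the zero-point level
print(f"q_vib at {T:.0f} K = {q_vib:.6f}")    # ≈ 1: vibrations frozen out at 300 K
```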
|
https://en.wikipedia.org/wiki/Vibrational_temperature
|
A vibratory fluidized bed (VFB) is a type of fluidized bed in which mechanical vibration enhances the fluidization process. Since its first description, the vibratory fluidized bed has proved more efficient at handling fine particles, which are very difficult to fluidize in a conventional fluidized bed. Despite numerous publications and its popularity in industrial applications, knowledge of vibratory bed dynamics and properties remains limited, and further research and development are needed to bring the technology to the next level.
Vibratory fluidized bed technology dates to 1984, when Geldart conducted experiments to observe how different particle groups behave when a vibration mechanism is introduced to assist fluidization. [ 1 ] Although it has been around for the past 20 years, only limited research has been done to improve the technology further. Recently, attention has turned to environmentally friendly machinery for the sustainability of the earth, and more research has therefore been conducted on the effect of vibration in fluidization: the vibratory fluidized bed is not only environmentally friendly but also cheaper than other fluidized beds.
Improvement over conventional fluidized bed technology led to the vibratory fluidized bed, in which the bed is designed by combining vibration with gas flowing vertically through the conveyor bed. It offers the advantages of a fluidized bed, and the feed moves along the vibrating conveyor until it has dried enough to break up, which lowers the chance of agglomerate build-up in the feed; the design is therefore useful for processing group C particles, which consist of small, fine particles, into smaller agglomerates. [ 2 ] [ 3 ]
Vibratory fluidized beds are used mainly in industries such as pharmaceuticals, agriculture, catalysts, plastics, minerals and food processing. [ 4 ] [ 5 ] Typical applications are drying products in the form of grains and crystals, cooling the dried products, agglomeration and granulation of coarse particles, and sterilizing. [ 4 ] [ 6 ]
As mentioned above, vibratory fluidized beds are used mainly in industries where a consistent, defect-free particle size is needed to produce a better product for the consumer. The most common process operations based on vibratory fluidized bed technology are dryers and coolers.
A standard vibratory fluidized dryer consists of a vibrating tray conveyor in which hot gases from the chamber flow through holes in the tray and come into contact with the material to be dried. The tray area is large enough to tolerate a constant flow of material through the bed, passed along the deck at a shallow depth on the tray. The vertical component of the deck vibration assists in fluidizing the material, whereas the horizontal component helps transport material along the tray. [ 5 ]
Vibratory fluidized coolers operate in the same manner, but instead of hot gases being fed from the chamber, recirculating air flows through the chamber, and an atomizing nozzle generates a water mist as the cooling medium. Alternative designs use cold-water coils over which the inlet air passes; this option is used when the incoming air has a large temperature difference compared with the material being cooled. [ 7 ]
Some of the advantages of vibratory fluidized beds include: [ 4 ] [ 7 ] [ 8 ]
Limitations of vibratory fluidized beds are as follows: [ 7 ] [ 9 ] [ 10 ]
To give more detailed insight into the vibratory fluidized bed, several characteristics are described below, showing the relationships among these characteristics and the operating conditions, and how they can affect processes conducted in a vibratory fluidized bed.
The term voidage refers to the spacing between the material particles. It is critical to know how the voidage behaviour of particles of a given size affects a process in a vibratory fluidized bed, as voidage is one of the key factors considered when designing and scaling up a vibratory fluidized bed from laboratory to industrial scale. Several experiments have shown that vibration assists the fluidization of particles, as the axial and radial voidage distributions become more homogeneous. This is especially true for vibratory fluidized beds with large vibration amplitudes. It was also found that, with increasing bed height, the vibration energy can be damped out by the layers of particles in the bed. Analysis of the wave propagation showed that its parameters are affected by the fluidization behaviour. [ 11 ]
In a vibratory fluidized bed, energy is transferred when the vibrating wall comes into contact with the particles. These particles collide with other particles in the bed, passing on kinetic energy in the form of wave propagation throughout the bed. The magnitude of the energy transferred is related to the amplitude, because of the oscillations caused by wave reflection at the medium boundary in the vibratory fluidized bed. [ 12 ]
To assess the bubbling behaviour of the vibratory fluidized bed, factors such as bubble size and bubble velocity are taken into account. Numerical simulations of the vibratory fluidized bed were conducted for various vibration amplitudes and frequencies to better understand the behaviour of the bubbles under vibration. The results showed that the oscillatory displacement of the vibratory fluidized bed causes the mean bubble diameter to increase but lowers the acceleration rate of the bubbles. It was thus concluded that bubbling behaviour in a vibratory fluidized bed depends on the vibrations. [ 13 ]
To consider multicomponent moisture in solids in a vibratory fluidized bed drier, a model was used to assess the drying characteristics of a thin layer of particles wetted with a multicomponent mixture. This was done to gain a better understanding of multicomponent drying, whose rigorous treatment is a tedious and time-consuming process. Based on the model, which assumes plug flow of solids, the selectivity and the best drying conditions to achieve the desired final moisture composition were determined. For a highly volatile component mixture, the composition of the liquid left in the product from the vibratory fluidized bed can be controlled by adding small amounts of the other components to the solid feed. [ 5 ]
One advantage of the vibratory fluidized bed is its small pressure drop, and several studies have shown that, over a given range of operating conditions, the pressure drop of a vibrated bed is much smaller than that of a conventional one. The same holds for the pressure drop at minimum fluidization, which decreases with increasing vibration amplitude and decreasing frequency. [ 14 ] The pressure drop across the vibratory fluidized bed has a large impact on heat and mass transfer in the process. An increase in bed porosity corresponds to the reduction in pressure loss, and this change in pressure loss depends on the frequency and amplitude of the surface vibration. [ 15 ]
The bed height of a vibratory fluidized bed is also an important characteristic, as it affects several other parameters. Previous research found that the minimum fluidization velocity of a vibratory fluidized bed is affected by bed height. Changes in bed height also affect the fluidization behaviour and flow dynamics: increasing the static bed height increases the solids concentration in the central part of the vibratory fluidized bed. [ 16 ]
When first designing a vibratory fluidized bed, certain heuristics are followed so that the design is best suited to the desired process and the optimal operating conditions are known; the main considerations are discussed below.
After the first few fluidized beds were successfully applied in industrial processes, demand for more types of fluidization technology rose to satisfy growing industrial demand. Vibration was added to the fluidized bed in 1984, when Geldart [ 1 ] showed that a mechanically vibrated sieve can improve the fluidization of small, fine particles; such powders are difficult to fluidize because of their unpredictable behaviour. It was later found that adding vibration to the fluidization process is cheaper and more environmentally friendly. This served as a starting point for further research on the effects of vibration on fluidization. Mujumdar (1988) [ 17 ] devised two vibration-based fluidization methods for fluidizing heat-sensitive and paste-like materials. Yoshihide et al. (2003) [ 18 ] studied the effect of vibration on fluidization behaviour and the prediction of the minimum fluidization velocity. Kaliyaperumal et al. (2011) [ 19 ] determined the effect of different vibrations on nano- and sub-micrometre particles, which are hard to fluidize in the absence of mechanical vibration and have special properties.
As mentioned before, one way to determine the best operating conditions is to create a mathematical or process model in software to simulate the vibratory fluidized bed for the desired process.
The effects of gas velocity and temperature were modelled. One aim is to increase the drying rate, since a higher drying rate shortens the drying process in the vibratory fluidized bed and improves its overall efficiency. Three major mechanisms determine the drying rate: heat and mass transfer on the gas side, the thermodynamic equilibrium between the two phases during contact, and heat and mass transfer within the wet solid. These mechanisms intensify with increasing gas velocity, as do the heat and mass transfer coefficients. The drying rate also increases with gas temperature, which lowers the gas humidity. [ 20 ] The effects of particle size were modelled as well. Larger particles need a longer time to dry to reach the same moisture content, owing to greater resistance within the particles against heat and mass transfer. Since the resistance against heat transfer within the particle is lower than the resistance against mass transfer, the convection heat not used to vaporize water raises the material temperature, which leads to higher moisture transfer coefficients within the particles and hence a higher drying rate. It was therefore concluded that, for optimum operating conditions, the size of the particles fed into the vibratory fluidized bed should be decreased. Usually the particle size of the feed material is not a controlled parameter unless methods such as grinding are used, but doing so involves extra operating cost, which should be avoided. Hence, another option is to increase the intensity of the vibrations in the vibratory fluidized bed. [ 20 ]
One of the final parts of the heuristics is scaling up the vibratory fluidized bed from laboratory to industrial scale. Several factors should be taken into consideration. One is the energy consumption of an industrial-scale vibratory fluidized bed, because a potential customer will want to know the requirements of the process; the individual energy consumption of each part of the vibratory fluidized bed should therefore be taken into account. [ 21 ] The same applies from an economic perspective: most buyers of a vibratory fluidized bed will use it in a process intended to make a profit, so a detailed cost analysis should be done. [ 21 ] From an environmental point of view there is little to worry about, apart from possible safety issues, because the vibratory fluidized bed itself is generally considered environmentally friendly and the waste produced is already treated in the process. Finally, characteristics that affect scale-up, such as the dependence of voidage behaviour on particle size mentioned earlier, should not be forgotten. [ 11 ]
For a vibratory fluidized bed, the common waste products include ash, dust and small solid particles produced by material contact and heating. The inlet gas and the overflow from a fluidized bed usually have to be cleaned for environmental reasons. The waste stream also contains a large amount of the product of interest, which needs to be recovered. This can be achieved by simple separation techniques such as gas cyclones, baghouses and scrubbers.
A gas cyclone is a device that separates small solid particles from suspension in a gas. Gas is fed tangentially into the cyclone body, and the high-speed rotating flow establishes a centrifugal force and creates vortices of particles. [ 22 ] Different cyclones have different specifications and characteristics. Generally, particles larger than 100 μm, or denser particles, which have more inertia, are pushed towards the wall, sink to the bottom of the cyclone and exit via the underflow. These solids are collected as the product of the fluidized bed.
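Cyclone performance is commonly estimated with the classical Lapple cut-size model, which is not given in the source; the sketch below uses it with assumed operating values to estimate the particle diameter collected with 50% efficiency:

```python
import math

# Minimal sketch (standard Lapple cut-size model; all operating values assumed):
# d50 = sqrt( 9 μ W / (2 π N_e V_i (ρ_p - ρ_g)) )
mu = 1.8e-5     # gas viscosity, Pa s (air near 300 K)
W = 0.2         # cyclone inlet width, m (assumed)
Ne = 6          # effective number of gas turns inside the cyclone (typical value)
Vi = 15.0       # inlet gas velocity, m/s (assumed)
rho_p = 2000.0  # particle density, kg/m^3 (assumed ash/dust)
rho_g = 1.2     # gas density, kg/m^3

d50 = math.sqrt(9 * mu * W / (2 * math.pi * Ne * Vi * (rho_p - rho_g)))
print(f"cut diameter d50 ≈ {d50*1e6:.1f} μm")  # particles well above d50 report to underflow
```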
If the process requires it, multiple cyclones can be operated in parallel to increase efficiency or in series to increase recovery. The overflow contains gas and small amounts of ash and dust; it is usually discharged to the air or fed into a baghouse for further treatment.
A baghouse is an air pollution control device designed to filter particles out of air or another gas using engineered fabric filter tubes. Different baghouse cleaning methods can be applied in different applications. The general principle is to use heat or pressure to pulse air through the top of the fabric filter material to detach the collected particles from the bags. "Fines" such as ash and dust are filtered out and collected in a fines discharge box. Alternatively, the fines can be reintroduced into the original product stream with a "blow-through" type rotary valve. The cleaned gas is discharged to the atmosphere via an industrial exhaust fan and stack.
A scrubber is also an air pollution control device. In contrast to a baghouse, a scrubber injects a dry reagent or slurry into the dirty feed gas, removing pollutants by contact with the target materials. Depending on the properties of the compound, different pollutants call for different scrubbing techniques and reagents. For ash and dust, water can be used as the scrubbing solution.
|
https://en.wikipedia.org/wiki/Vibratory_fluidized_bed
|
Vibratory shear enhanced process (VSEP) is a membrane separation technology platform invented in 1987 and patented in 1989 by Dr. J. Brad Culkin . [ 1 ] VSEP's vibration system was designed to prevent membrane fouling, or the build-up of solid particles on the surface of the membrane. VSEP systems have been applied in a variety of industrial environments. [ 2 ]
After earning his PhD in chemical engineering from Northwestern University [ 3 ] Dr. Culkin spent his early professional career with Dorr–Oliver, Inc. , a pioneering company in the area of separation processes . [ 4 ] Culkin contributed to six Dorr–Oliver patent applications in 1985 and 1986. [ 5 ]
While at Dorr–Oliver, Dr. Culkin was exposed to the advantages of membrane separation technology as well as its failings. The membrane's Achilles' heel , Culkin decided, was fouling . [ 6 ]
Concurrent with his membrane work, Culkin was helping to develop a mechanically resonating loudspeaker with the founders of Velodyne Acoustics . [ 7 ] Culkin married these two areas of expertise and struck out to overcome membrane fouling through the use of vibration .
The first VSEP prototype Culkin developed was a literal combination of loudspeaker and membrane technology, [ 8 ] as shown in the photo below.
A VSEP filter uses oscillatory vibration to create high shear at the surface of the filter membrane. This high shear force significantly improves the filter's resistance to fouling, thereby enabling high throughput and minimizing reject volumes. The VSEP feed stream is split into two products: a permeate stream with little or no solids and a concentrate stream with a solids concentration much higher than that of the original feed stream. [ 9 ] [ 10 ]
VSEP has been applied in a variety of industrial application areas including pulp and paper, chemical processing, landfill leachate , oil and gas, RO Reject and a variety of industrial wastewaters . [ 11 ] [ 2 ]
A VSEP system was recognized in 2009 as part of the WateReuse Foundation's Desalination Project of the Year. [ 12 ] The system was installed to minimize the brine from an electrodialysis reversal (EDR) system. [ 13 ]
|
https://en.wikipedia.org/wiki/Vibratory_shear-enhanced_process
|
Vibratory Stress Relief , often abbreviated VSR, is a non-thermal stress relief method used by the metal working industry to enhance the dimensional stability and mechanical integrity of castings , forgings , and welded components, chiefly for two categories of these metal workpieces:
This stress is called residual stress , [ 1 ] because it remains in a solid material after the original cause of the stress has been removed. Residual stresses can occur through a variety of mechanisms including inelastic (plastic) deformations, temperature gradients (during thermal cycle), or structural changes ( phase transformation ). For example, heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. These stresses often lead to distortion or warping of the structure during machining, assembly, testing, transport, field-use or over time. In extreme cases, residual stress can cause structural failure .
Almost all vibratory stress relief equipment manufacturers and procedures use the workpiece's own resonant frequency to boost the loading produced by the induced vibration, so as to maximize the degree of stress relief achieved. Some equipment and procedures are designed to operate near, but not at, workpiece resonances (perhaps to extend equipment life), although independent research [ 2 ] has consistently shown resonant-frequency vibration to be more effective. See references 4, 6, and 9.
The effectiveness of vibratory stress relief is highly questionable. [ 3 ] In general, the strain amplitudes achieved during vibratory stress relief are too low to exceed the critical stress required to activate mechanical relaxation during the low-amplitude, high-cycle fatigue excitation induced by the transducer vibrations. If the strain amplitudes were increased to a level sufficient to cause instability in the residual stresses, fatigue damage would occur. [ 4 ] [ 5 ] For most applications, conventional stress relief methodologies should be applied to components that require the reduction of residual stresses. [ 6 ]
Effective vibratory stress relief treatment results from a combination of factors:
Each of these changes, which often occur together (i.e., peak growth and shifting), is consistent with a lowering of the rigidity of the workpiece; workpiece rigidity is inflated by the presence of residual stress. In the example below, which depicts a common resonance-pattern change during vibratory stress relief, the large peak grew by 47% while simultaneously shifting to the left by 28 RPM, which is less than 0.75%. See Figure 4.
The equipment used to perform this stress relief had vibrator speed regulation of ± 0.02%, and speed increment fine-tuning of 1-RPM, which allowed even subtle shifting of the peaks to be accurately tracked to their final, stable locale.
The pattern of change, i.e., how quickly the peaks grow and shift, is faster at the beginning of vibration treatment: As treatment continues, the rate of change decreases, eventually resulting in a new, stable resonance pattern. Stability of this new resonance pattern indicates that dimensional stability of the workpiece has been achieved.
The power plot is useful in both positioning and orienting the vibrator, and when adjusting the vibrator unbalance. Poor or inappropriate vibrator locations or orientations, or excessive vibrator unbalance settings, cause large peaks in the power plot. Use of higher-powered vibrator motors (above 2-kW) provides more "head-room" for peaks in power to be tolerated, and treatment to take place, which was the case here: The power peak at ≈ 3700-RPM was only half of the vibrator motor's 2.3-kW power capacity (top of the power scale).
A Pre-Treatment Scan, which functions as a base-line, is first recorded in green. The operator uses this green data set to tune onto the resonances and to monitor the growth and shifting of the resonance peaks. After peak growth and shifting have subsided, a Post-Treatment Scan is made (red). This data is superimposed on the original green Pre-Treatment Scan data, documenting the changes in the resonance pattern. The stress relief treatment resulted in 47% growth of the original large peak, which shifted to the left by 28 RPM (less than 0.75%).
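The peak-tracking procedure can be sketched numerically: given a pre- and a post-treatment scan (here synthetic Lorentzian peaks chosen to reproduce the 47% growth and 28-RPM shift quoted above), the script reports the growth and shift an operator would read off:

```python
import numpy as np

# Minimal sketch (synthetic scan data with assumed Lorentzian peak shapes):
# compare pre- and post-treatment resonance scans to quantify peak growth
# and shift, the two indicators of change described above.
rpm = np.arange(3000, 4500, 1.0)              # scan axis, 1-RPM increments

def lorentzian(x, center, width, height):
    return height / (1.0 + ((x - center) / width) ** 2)

pre = lorentzian(rpm, 3728.0, 40.0, 100.0)    # base-line (green) scan, assumed
post = lorentzian(rpm, 3700.0, 40.0, 147.0)   # stabilized (red) scan, assumed

pre_peak, post_peak = rpm[np.argmax(pre)], rpm[np.argmax(post)]
growth = 100.0 * (post.max() - pre.max()) / pre.max()
shift = post_peak - pre_peak

print(f"peak growth: {growth:.0f}%")           # 47%, as in the documented example
print(f"peak shift:  {shift:+.0f} RPM ({100*abs(shift)/pre_peak:.2f}%)")
```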
After stress relief treatment, the braces (rust-colored, structural beams), which are used to maintain the desired shape during welding, were removed. The spacing between the two "arms" remained the same; no change was detectable (measured to 1/32" or less than 1 mm), and the spacing remained so throughout assembly, testing (to 60 ton test loads), transport, and installation.
VSR is not accepted by the Engineering community at large as a viable method of relaxing or reducing residual stresses in components that require it. For general use, conventional residual stress relaxation methodologies are recommended. [ 9 ]
Historically, the first type of stress relief was performed on castings by storing them outside for months or even years. This was referred to as curing , a term used for long-term storage of freshly hewn wood. Fresh castings were referred to as being green, meaning, they were prone to distortion during precision machining, just as green wood bows during cutting.
Later, thermal stress relief (TSR) was developed to alleviate the lengthy time requirements of curing. It has been known for many years, however, that TSR has limitations or shortcomings, specifically:
Metal components, whose function would be enhanced by stress relief, and fall into one or more of the above categories, are strong candidates for VSR for quality-related reasons.
Further, there is a strong economic incentive to use vibratory stress relief on large workpieces, since stress relief using a furnace (thermal stress relief, or TSR) is highly energy-intensive, consuming much natural gas and hence producing much CO₂. The cost of TSR is approximately proportional to a metal component's weight or overall size, estimated at US$2,500 for the structure pictured, plus transportation costs, which might involve special transport permits, to and from a furnace. VSR treatment would cost a company owning appropriate equipment less than 15% as much (≈ $400) as TSR treatment, chiefly amortization of the equipment investment plus labor and a modest amount of electricity, and treatment would take less than two hours, with no transport required. However, the lack of independent data showing that this technique is effective may mean that even that lesser investment is not of any value, so the use of VSR should be evaluated very carefully before proceeding.
D. Rao, J. Ge, and L. Chen, "Vibratory Stress Relief in the Manufacturing of the Rails of a Maglev System", J. of Manufacturing Science and Engineering, 126(2), 388–391 (2004)
B.B. Klauba, C.M. Adams, and J.T. Berry, "Vibratory Stress Relief: Methods Used to Monitor and Document Effective Treatment, A Survey of Users, and Directions for Further Research", Proc. of ASM, 7th International Conference: Trends in Welding Research, 601–606 (2005)
Y. Yang, G. Jung, and R. Yancey, "Finite Element Modeling of Vibratory Stress Relief after Welding", Proc. of ASM, 7th International Conference: Trends in Welding Research, 547–552 (2005)
|
https://en.wikipedia.org/wiki/Vibratory_stress_relief
|
Vibrion is the singular form of vibrio, a genus of facultatively anaerobic bacteria with a comma-like shape. Vibrion is an antiquated term for microorganisms, especially pathogenic ones; see germ theory of disease. The term was used specifically in reference to motile microorganisms, and the name of the genus Vibrio derives from it. The term is closely tied to the history of the study of cholera and was used in biological literature between the late 19th century and the 1920s.
Bacteria with the same characteristics as those of the genus Vibrio were discovered independently multiple times, but only later findings were able to connect these bacteria with cholera , tetanus , and other diseases.
Leeuwenhoek may have observed Vibrio bacteria after his discovery of “ animalcules ” described in his letters to the Royal Society . [ 1 ] He described microorganisms with the same appearances and behaviors as bacteria belonging to the genus Vibrio . [ 1 ] [ 2 ] Bacteria of this genus were later anonymously described as “Capillary Eels” in the 1703 issue of Philosophical Transactions by a “Sir C. H.” because of their thin, wormlike appearance. [ 2 ] Additionally, the naturalist O. F. Müller documented eight species of the genus Vibrio in his work on infusoria . [ 3 ]
In 1854, the Italian anatomist Filippo Pacini coined the term "vibrions" in a paper he published during the third Cholera pandemic arguing that they were the main agents causing cholera. [ 4 ] He drew his conclusion from his observations of the thin, wormlike bacteria present in the blood and stool of cholera patients, especially characteristic of late-stage infections. [ 5 ] The Vibrio cholerae identified by Pacini were rediscovered by Robert Koch in 1884, who was unaware of Pacini's work; he called them “Comma Bacillus” and received worldwide fame as a result of his discovery. [ 6 ]
The term "vibrion" was subsequently used by Louis Pasteur in 1861 in naming a bacterium he discovered, vibrion butyrique, which was capable of surviving in an environment without oxygen. [ 7 ] This bacterium was later identified as the same organism that had been discovered by two other scientists and renamed Clostridium butyricum. [ 8 ] [ 7 ]
By the 20th century "vibrion" came to be used as a general term for motile microorganisms with an elongated, wormlike shape associated with pathogenic illnesses such as cholera and tetanus . It was also incorporated in the names created for several bacteria by microbiologists at the time, such as in the name " Vibrion septique " from a 1922 paper in The Journal of Medical Research. [ 9 ] In an issue of the journal Modern Medicine from 1893, the term "cholera vibrion" is used to refer to Vibrio cholerae . [ 10 ] In the same journal from 1893, the term "vibrion" is said to be dated, which highlights the brevity of the time period in which the word was used. [ 10 ]
The term "vibrion" was out of use by the late 1920s and does not appear on its own in subsequent biological literature. This is largely due to the more extensive development of bacterial taxonomy towards the turn of the 19th century, which gave bacteriologists a more specific way to classify microorganisms. The term "vibrion" was adapted into the name of the Vibrio prokaryotic genus.
|
https://en.wikipedia.org/wiki/Vibrion
|
Vibronic coupling (also called nonadiabatic coupling or derivative coupling ) in a molecule involves the interaction between electronic and nuclear vibrational motion. [ 1 ] [ 2 ] The term "vibronic" originates from the combination of the terms "vibrational" and "electronic", denoting the idea that in a molecule, vibrational and electronic interactions are interrelated and influence each other. The magnitude of vibronic coupling reflects the degree of such interrelation.
In theoretical chemistry , the vibronic coupling is neglected within the Born–Oppenheimer approximation . Vibronic couplings are crucial to the understanding of nonadiabatic processes, especially near points of conical intersections . [ 3 ] [ 4 ] The direct calculation of vibronic couplings used to be uncommon due to difficulties associated with its evaluation, but has recently gained popularity due to increased interest in the quantitative prediction of internal conversion rates, as well as the development of cheap but rigorous ways to analytically calculate the vibronic couplings, especially at the TDDFT level. [ 5 ] [ 6 ] [ 7 ]
Vibronic coupling describes the mixing of different electronic states as a result of small vibrations.
The evaluation of vibronic coupling often involves complex mathematical treatment.
The vibronic coupling is essentially the derivative of the electronic wave function with respect to the nuclear coordinates. Each component of the vibronic coupling vector can be calculated with numerical differentiation methods using wave functions at displaced geometries. This is the procedure used in MOLPRO. [ 8 ]
First-order accuracy can be achieved with the forward difference formula (for $k' \neq k$, with step size $\Delta$):
$$\mathbf{d}_{k'k}\cdot\mathbf{e}_l \approx \frac{1}{\Delta}\,\langle \psi_{k'}(\mathbf{R})\,|\,\psi_k(\mathbf{R}+\Delta\,\mathbf{e}_l)\rangle$$
Second-order accuracy can be achieved with the central difference formula:
$$\mathbf{d}_{k'k}\cdot\mathbf{e}_l \approx \frac{1}{2\Delta}\left[\langle \psi_{k'}(\mathbf{R})|\psi_k(\mathbf{R}+\Delta\,\mathbf{e}_l)\rangle - \langle \psi_{k'}(\mathbf{R})|\psi_k(\mathbf{R}-\Delta\,\mathbf{e}_l)\rangle\right]$$
Here, $\mathbf{e}_l$ is a unit vector along direction $l$. In practice the overlaps are evaluated via the transition density $\gamma^{k'k}$ between the two electronic states.
Evaluation of the electronic wave functions of both electronic states is required at N displaced geometries for first-order accuracy, and at 2N displacements for second-order accuracy, where N is the number of nuclear degrees of freedom. This can be extremely computationally demanding for large molecules.
As with other numerical differentiation methods, the evaluation of the nonadiabatic coupling vector with this approach is numerically unstable, limiting the accuracy of the result. Moreover, the calculation of the two transition densities in the numerator is not straightforward. The wave functions of both electronic states are expanded in Slater determinants or configuration state functions (CSFs). The contribution from the change of the CSF basis is too demanding to evaluate numerically and is usually ignored by employing an approximate diabatic CSF basis. This causes further inaccuracy in the calculated coupling vector, although the error is usually tolerable.
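A minimal, self-contained illustration of the finite-difference procedure, using a model two-state Hamiltonian rather than real electronic-structure wave functions (the Hamiltonian form and step size are assumptions for the sketch):

```python
import numpy as np

# Minimal sketch (model system, not an electronic-structure code):
# central-difference evaluation of the derivative coupling
# d10 = <psi_1 | d/dq psi_0> from eigenvector overlaps at displaced geometries.
def hamiltonian(q):
    """Linear-vibronic-coupling-style 2x2 model (assumed form)."""
    return np.array([[0.5 * q, 0.05],
                     [0.05,   -0.5 * q]])

def states(q):
    _, vecs = np.linalg.eigh(hamiltonian(q))
    return vecs  # columns are the adiabatic states 0 and 1

q0, h = 0.1, 1e-4
v0 = states(q0)
vp, vm = states(q0 + h), states(q0 - h)

# Fix the arbitrary eigenvector sign so the finite difference is smooth.
for v in (vp, vm):
    for k in range(2):
        if np.dot(v[:, k], v0[:, k]) < 0:
            v[:, k] *= -1

d10 = np.dot(v0[:, 1], (vp[:, 0] - vm[:, 0]) / (2 * h))  # <1 | d/dq | 0>
print(f"derivative coupling d10 at q={q0}: {d10:.4f}")
```

As the text notes, in a real calculation this finite-difference step is repeated along every nuclear degree of freedom, which is what makes the numerical approach expensive for large molecules.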
Evaluating derivative couplings with analytic gradient methods has the advantage of high accuracy and very low cost, usually much cheaper than a single point calculation; relative to numerical differentiation this amounts to an acceleration factor of roughly 2N. However, the process involves intense mathematical treatment and programming. As a result, few programs have so far implemented the analytic evaluation of vibronic couplings at wave function theory levels. Details about this method can be found in ref. [ 9 ] For the implementation for SA-MCSCF and MRCI in COLUMBUS , please see ref. [ 10 ]
The computational cost of evaluating the vibronic coupling using (multireference) wave function theory has led to the idea of evaluating them at the TDDFT level, which indirectly describes the excited states of a system without describing its excited state wave functions. However, the derivation of the TDDFT vibronic coupling theory is not trivial, since there are no electronic wave functions in TDDFT that are available for plugging into the defining equation of the vibronic coupling. [ 5 ]
In 2000, Chernyak and Mukamel [ 11 ] showed that in the complete basis set (CBS) limit, knowledge of the reduced transition density matrix between a pair of states (both at the unperturbed geometry) suffices to determine the vibronic couplings between them. The vibronic couplings between two electronic states are given by contracting their reduced transition density matrix with the geometric derivatives of the nuclear attraction operator, followed by dividing by the energy difference of the two electronic states:

$$\mathbf{d}^{k'k}_{A} \;=\; \frac{1}{E_{k}-E_{k'}}\int \gamma^{k'k}(\mathbf{r})\,\nabla_{\mathbf{R}_A} V_{\mathrm{ne}}(\mathbf{r})\,\mathrm{d}\mathbf{r}$$

where $V_{\mathrm{ne}}$ is the nuclear attraction potential and $\mathbf{R}_A$ the position of nucleus $A$.
This enables one to calculate the vibronic couplings at the TDDFT level, since although TDDFT does not give excited state wave functions, it does give reduced transition density matrices, not only between the ground state and an excited state, but also between two excited states. The proof of the Chernyak-Mukamel formula is straightforward and involves the Hellmann-Feynman theorem . While the formula provides useful accuracy for a plane-wave basis (see e.g. ref. [ 12 ] ), it converges extremely slowly with respect to the basis set if an atomic orbital basis set is used, due to the neglect of the Pulay force . Therefore, modern implementations in molecular codes typically use expressions that include the Pulay force contributions, derived from the Lagrangian formalism. [ 5 ] [ 6 ] [ 7 ] They are more expensive than the Chernyak-Mukamel formula, but still much cheaper than the vibronic couplings at wave function theory levels (more specifically, they are roughly as expensive as the SCF gradient for ground state-excited state vibronic couplings, and as expensive as the TDDFT gradient for excited state-excited state vibronic couplings). Moreover, they are much more accurate than the Chernyak-Mukamel formula for realistically sized atomic orbital basis sets. [ 5 ]
In programs where even the Chernyak-Mukamel formula is not implemented, there exists a third way to calculate the vibronic couplings, which gives the same results as the Chernyak-Mukamel formula. The key observation is that the contribution of an atom to the Chernyak-Mukamel vibronic coupling can be expressed as the nuclear charge of the atom times the electric field generated by the transition density (the so-called transition electric field), evaluated at the position of that atom. Therefore, Chernyak-Mukamel vibronic couplings can in principle be calculated by any program that both supports TDDFT and can compute the electric field generated by an arbitrary electron density at an arbitrary position. This technique was used to compute vibronic couplings using early versions of Gaussian , before Gaussian implemented vibronic couplings with the Pulay term. [ 13 ]
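As a concrete illustration of this last point, the sketch below assembles Chernyak-Mukamel couplings atom by atom. The `transition_field` callable, standing in for a quantum chemistry program's electric-field evaluator, is a hypothetical interface assumed for illustration:

```python
import numpy as np

def chernyak_mukamel_coupling(atoms, transition_field, delta_e):
    """Assemble the Chernyak-Mukamel vibronic coupling from the 'transition
    electric field': each atom contributes its nuclear charge times the
    electric field generated by the transition density at its position,
    and the sum is divided by the energy difference of the two states.

    atoms            : list of (nuclear_charge, position) tuples
    transition_field : callable(position) -> (3,) array, the electric field
                       of the transition density at `position` (hypothetical)
    delta_e          : energy difference E_k - E_k' between the two states
    """
    d = np.array([charge * transition_field(pos) for charge, pos in atoms])
    return d / delta_e      # one (3,) coupling vector per atom
```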
Vibronic coupling is large when two adiabatic potential energy surfaces come close to each other (that is, when the energy gap between them is of the order of magnitude of one vibrational quantum). This happens in the neighbourhood of an avoided crossing of potential energy surfaces corresponding to distinct electronic states of the same spin symmetry. In the vicinity of conical intersections , where potential energy surfaces of the same spin symmetry cross, the magnitude of vibronic coupling approaches infinity. In either case the adiabatic or Born–Oppenheimer approximation fails and vibronic couplings have to be taken into account.
The large magnitude of vibronic coupling near avoided crossings and conical intersections allows wave functions to propagate from one adiabatic potential energy surface to another, giving rise to nonadiabatic phenomena such as radiationless decay . Therefore, one of the most important applications of vibronic couplings is the quantitative calculation of internal conversion rates, through e.g. nonadiabatic molecular dynamics [ 14 ] (including but not limited to surface hopping and path integral molecular dynamics ). When the potential energy surfaces of both the initial and the final electronic state are approximated by multidimensional harmonic oscillators, one can compute the internal conversion rate by evaluating the vibration correlation function, which is much cheaper than nonadiabatic molecular dynamics and is free from random noise; this gives a fast method to compute the rates of relatively slow internal conversion processes, for which nonadiabatic molecular dynamics methods are not affordable. [ 15 ]
The singularity of vibronic coupling at conical intersections is responsible for the existence of the geometric phase , which was discovered by Longuet-Higgins [ 16 ] in this context.
Although crucial to the understanding of nonadiabatic processes, the direct evaluation of vibronic couplings remained very limited until recently.
Evaluation of vibronic couplings is often associated with severe difficulties in mathematical formulation and program implementations. As a result, the algorithms to evaluate vibronic couplings at wave function theory levels, or between two excited states, are not yet implemented in many quantum chemistry program suites. By comparison, vibronic couplings between the ground state and an excited state at the TDDFT level, which are easy to formulate and cheap to calculate, are more widely available.
The evaluation of vibronic couplings typically requires correct description of at least two electronic states in regions where they are strongly coupled. This usually requires the use of multi-reference methods such as MCSCF and MRCI , which are computationally demanding and delicate quantum-chemical methods. However, there are also applications where vibronic couplings are needed but the relevant electronic states are not strongly coupled, for example when calculating slow internal conversion processes; in this case even methods like TDDFT, which fails near ground state-excited state conical intersections, [ 17 ] can give useful accuracy. Moreover, TDDFT can describe the vibronic coupling between two excited states in a qualitatively correct fashion, even if the two excited states are very close in energy and therefore strongly coupled (provided that the equation-of-motion (EOM) variant of the TDDFT vibronic coupling is used in place of the time-dependent perturbation theory (TDPT) variant [ 5 ] ). Therefore, the unsuitability of TDDFT for calculating ground state-excited state vibronic couplings near a ground state-excited state conical intersection can be bypassed by choosing a third state as the reference state of the TDDFT calculation (i.e. the ground state is treated like an excited state), leading to the popular approach of using spin-flip TDDFT to evaluate ground state-excited state vibronic couplings. [ 18 ] When even an approximate calculation is unrealistic, the magnitude of vibronic coupling is often introduced as an empirical parameter determined by reproducing experimental data.
Alternatively, one can avoid the explicit use of derivative couplings by switching from the adiabatic to the diabatic representation of the potential energy surfaces . Although rigorous validation of a diabatic representation requires knowledge of the vibronic coupling, it is often possible to construct such diabatic representations by tracking the continuity of physical quantities such as the dipole moment, the charge distribution or orbital occupations. However, such a construction requires detailed knowledge of the molecular system and introduces significant arbitrariness: diabatic representations constructed with different methods can yield different results, and the reliability of the result rests on the discretion of the researcher.
The first discussion of the effect of vibronic coupling on molecular spectra is given in the paper by Herzberg and Teller. [ 19 ] Calculations of the lower excited levels of benzene by Sklar in 1937 (with the valence bond method) and later in 1938 by Goeppert-Mayer and Sklar (with the molecular orbital method) demonstrated a correspondence between the theoretical predictions and the experimental benzene spectrum . This work included the first qualitative computation of the efficiencies of various vibrations at inducing absorption intensity. [ 20 ]
|
https://en.wikipedia.org/wiki/Vibronic_coupling
|
Vibronic spectroscopy is a branch of molecular spectroscopy concerned with vibronic transitions: the simultaneous changes in electronic and vibrational energy levels of a molecule due to the absorption or emission of a photon of the appropriate energy. In the gas phase , vibronic transitions are also accompanied by changes in rotational energy.
Vibronic spectra of diatomic molecules have been analysed in detail; [ 1 ] emission spectra are more complicated than absorption spectra . The intensity of allowed vibronic transitions is governed by the Franck–Condon principle . Vibronic spectroscopy may provide information, such as bond length , on electronic excited states of stable molecules . It has also been applied to the study of unstable molecules such as dicarbon (C 2 ) in discharges , flames and astronomical objects . [ 2 ] [ 3 ]
Electronic transitions are typically observed in the visible and ultraviolet regions, in the wavelength range approximately 200–700 nm (50,000–14,000 cm −1 ), whereas fundamental vibrations are observed below about 4000 cm −1 . [ note 1 ] When the electronic and vibrational energy changes are so different, vibronic coupling (mixing of electronic and vibrational wave functions ) can be neglected and the energy of a vibronic level can be taken as the sum of the electronic and vibrational (and rotational) energies; that is, the Born–Oppenheimer approximation applies. [ 4 ] The overall molecular energy depends not only on the electronic state but also on vibrational and rotational quantum numbers, denoted v and J respectively for diatomic molecules. It is conventional to add a double prime ( v ″, J ″) for levels of the electronic ground state and a single prime ( v ′, J ′) for electronically excited states.
Each electronic transition may show vibrational coarse structure, and for molecules in the gas phase, rotational fine structure. This is true even when the molecule has a zero dipole moment and therefore has no vibration-rotation infrared spectrum or pure rotational microwave spectrum. [ 5 ]
It is necessary to distinguish between absorption and emission spectra. With absorption the molecule starts in the ground electronic state, and usually also in the vibrational ground state v ″ = 0 because at ordinary temperatures the energy necessary for vibrational excitation is large compared to the average thermal energy. The molecule is excited to another electronic state and to many possible vibrational states v' = 0, 1, 2, 3, ... . With emission, the molecule can start in various populated vibrational states, and finishes in the electronic ground state in one of many populated vibrational levels. The emission spectrum is more complicated than the absorption spectrum of the same molecule because there are more changes in vibrational energy level.
For absorption spectra, the vibrational coarse structure for a given electronic transition forms a single progression , or series of transitions with a common level, here the lower level v ″ = 0 . [ 6 ] There are no selection rules for vibrational quantum numbers, which are zero in the ground vibrational level of the initial electronic ground state, but can take any integer values in the final electronic excited state. The term values G ( v ) for a harmonic oscillator are given by

$$G(v) = \bar{\nu}_{\text{electronic}} + \omega_e\left(v+\tfrac{1}{2}\right)$$

where v is a vibrational quantum number, and ω e is the harmonic wavenumber. In the next approximation the term values are given by

$$G(v) = \bar{\nu}_{\text{electronic}} + \omega_e\left(v+\tfrac{1}{2}\right) - \omega_e\chi_e\left(v+\tfrac{1}{2}\right)^2$$

where χ e is an anharmonicity constant. This is, in effect, a better approximation to the Morse potential near the potential minimum. The spacing between adjacent vibrational lines decreases with increasing quantum number because of anharmonicity in the vibration. Eventually the separation decreases to zero when the molecule photo-dissociates into a continuum of states. The second formula is adequate for small values of the vibrational quantum number. For higher values further anharmonicity terms are needed as the molecule approaches the dissociation limit, at the energy corresponding to the upper (final state) potential curve at infinite internuclear distance.
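A short numerical illustration of these term values follows; the constants are made-up but representative magnitudes for a light diatomic, not data for any particular molecule:

```python
# Vibrational term values with the first anharmonic correction
# (illustrative constants only).
we, wexe = 2000.0, 15.0      # harmonic wavenumber and we*xe, in cm^-1
nu_el = 30000.0              # electronic term value, in cm^-1

def G(v):
    return nu_el + we * (v + 0.5) - wexe * (v + 0.5) ** 2

# Spacing between adjacent progression members shrinks with v ...
spacings = [G(v + 1) - G(v) for v in range(5)]   # ~ we - 2*wexe*(v+1)

# ... and extrapolates to zero near the dissociation limit:
v_max = we / (2 * wexe) - 1                      # ~ 65.7 for these constants
```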
The intensity of allowed vibronic transitions is governed by the Franck–Condon principle . [ 7 ] The intensity distribution within a progression is governed by the difference between the equilibrium bond lengths of the initial electronic ground state and the final electronic excited state of the molecule. In accordance with the Born-Oppenheimer approximation, electronic motion is near-instantaneous compared to nuclear motion, so transitions between vibrational levels happen with essentially no change in the nuclear coordinates; they are most probable near the classical turning points of the vibration, where the nuclei move most slowly. [ 8 ] Such transitions can be represented as vertical lines between the various vibrational levels within electronic states on an energy level diagram.
It is generally true that the greater the changes to the bond length of a molecule upon excitation, the greater the contribution of vibrational states to a progression. The width of this progression itself is dependent on the range of transition energies available for internuclear distances close to the turning points of the initial vibration state. As the "well" of the potential energy curve of the final electronic state grows steeper, there are more final vibrational states available for transitions, and thus more energy levels to yield a wider spectrum.
Emission spectra are complicated due to the variety of processes through which electronically excited molecules can spontaneously return to lower energy states. [ 9 ] There is a tendency for molecules to undergo vibrational energy relaxation , where energy is lost non-radiatively from the Franck–Condon state (the vibrational state achieved after a vertical transition) to surroundings or to internal processes. The molecules can settle in the ground vibrational level of the excited electronic state, where they can continue to decay to various vibrational levels in the ground electronic state, before ultimately returning to the lowest vibrational level of the ground state. [ 10 ]
If emission occurs before vibrational relaxation can take place, the resulting fluorescence is referred to as resonance fluorescence . In this case, the emission spectrum is identical to the absorption spectrum. Resonance fluorescence, however, is not very common and is mainly observed in small molecules (such as diatomics) in the gas phase; its rarity reflects that vibrational energy is usually lost within the short radiative lifetime of the excited state. [ 11 ] Emission from the ground vibrational level of the excited state after vibrational relaxation is much more prevalent, and is referred to as relaxed fluorescence. Emission peaks of a molecule exhibiting relaxed fluorescence are found at longer wavelengths than the corresponding absorption peaks, the difference being the Stokes shift of the molecule.
Vibronic spectra of diatomic molecules in the gas phase have been analyzed in detail. [ 12 ] Vibrational coarse structure can sometimes be observed in the spectra of molecules in liquid or solid phases and of molecules in solution. Related phenomena including photoelectron spectroscopy , resonance Raman spectroscopy , luminescence , and fluorescence are not discussed in this article, though they also involve vibronic transitions.
The vibronic spectra of diatomic molecules in the gas phase also show rotational fine structure. Each line in a vibrational progression will show P- and R-branches . For some electronic transitions there will also be a Q-branch. The transition energies, expressed in wavenumbers, of the lines for a particular vibronic transition are given, in the rigid rotor approximation, that is, ignoring centrifugal distortion , by [ 13 ]

$$G(J',J'') = \bar{\nu}_{v'-v''} + B'J'(J'+1) - B''J''(J''+1)$$

Here B are rotational constants and J are rotational quantum numbers . (For B also, a double prime indicates the ground state and a single prime an electronically excited state.) The values of the rotational constants may differ appreciably because the bond length in the electronic excited state may be quite different from the bond length in the ground state, because of the operation of the Franck-Condon principle. The rotational constant is inversely proportional to the square of the bond length. Usually B ′ < B ″ , as is true when an electron is promoted from a bonding orbital to an antibonding orbital , causing bond lengthening. But this is not always the case; if an electron is promoted from a non-bonding or antibonding orbital to a bonding orbital, there will be bond-shortening and B ′ > B ″ .
The treatment of rotational fine structure of vibronic transitions is similar to the treatment of rotation-vibration transitions and differs principally in the fact that the ground and excited states correspond to two different electronic states as well as to two different vibrational levels. For the P-branch J ′ = J ″ – 1 , so that

$$\bar{\nu}_P = \bar{\nu}_{v'-v''} + B'(J''-1)J'' - B''J''(J''+1) = \bar{\nu}_{v'-v''} - (B'+B'')J'' + (B'-B'')J''^2$$
Similarly, for the R-branch J ″ = J ′ – 1 , and

$$\bar{\nu}_R = \bar{\nu}_{v'-v''} + B'J'(J'+1) - B''J'(J'-1) = \bar{\nu}_{v'-v''} + (B'+B'')J' + (B'-B'')J'^2$$
Thus, the wavenumbers of transitions in both P- and R-branches are given, to a first approximation, by the single formula [ 13 ] [ 14 ]

$$\bar{\nu}_{P,R} = \bar{\nu}_{v',v''} + (B'+B'')m + (B'-B'')m^2, \qquad m = \pm 1, \pm 2,\ \text{etc.}$$

Here positive m values refer to the R-branch (with m = + J ′ = J ″ + 1 ) and negative values refer to the P-branch (with m = – J ″ ). The wavenumbers of the lines in the P-branch, on the low wavenumber side of the band origin at $\bar{\nu}_{v',v''}$ , increase with m . In the R-branch, for the usual case that B ′ < B ″ , as J increases the wavenumbers at first lie increasingly on the high wavenumber side of the band origin but then start to decrease, eventually lying on the low wavenumber side. The Fortrat diagram illustrates this effect. [ note 2 ] In the rigid rotor approximation the line wavenumbers lie on a parabola which has a maximum at

$$x = -\frac{B'+B''}{2(B'-B'')}$$
The line of highest wavenumber in the R-branch is known as the band head . It occurs at the value of m which is equal to the integer part of x , or of ( x + 1) .
When a Q-branch is allowed for a particular electronic transition, the lines of the Q-branch correspond to the case ∆ J = 0 , J ′ = J ″ , and the wavenumbers are given by [ 15 ]

$$\bar{\nu}_Q = \bar{\nu}_{v',v''} + (B'-B'')J(J+1), \qquad J = 1, 2, \dots$$

The Q-branch then consists of a series of lines with increasing separation between adjacent lines as J increases. When B ′ < B ″ the Q-branch lies to lower wavenumbers relative to the vibrational line.
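The branch formulas above are easy to explore numerically. The following sketch uses illustrative rotational constants (not data for any particular molecule) to generate P- and R-branch line positions and locate the band head from the Fortrat parabola:

```python
# Rigid-rotor line positions for a vibronic band (illustrative constants;
# B' < B'' is the usual case of bond lengthening on excitation).
nu0 = 20000.0            # band origin, cm^-1
B_up, B_lo = 1.90, 2.00  # B' (upper state) and B'' (lower state), cm^-1

def line(m):
    """m > 0 -> R-branch (m = J''+1), m < 0 -> P-branch (m = -J'')."""
    return nu0 + (B_up + B_lo) * m + (B_up - B_lo) * m ** 2

r_branch = [line(m) for m in range(1, 40)]
p_branch = [line(-m) for m in range(1, 40)]

# Band head: vertex of the Fortrat parabola at x = -(B'+B'')/(2(B'-B'')).
x = -(B_up + B_lo) / (2 * (B_up - B_lo))        # ~ +19.5 -> head in R-branch
nu_head = max(line(int(x)), line(int(x) + 1))   # integer part of x, or x+1
```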
The phenomenon of predissociation occurs when an electronic transition results in dissociation of the molecule at an excitation energy less than the normal dissociation limit of the upper state. This can occur when the potential energy curve of the upper state crosses the curve for a repulsive state , so that the two states have equal energy at some internuclear distance. This allows the possibility of a radiationless transition to the repulsive state whose energy levels form a continuum, so that there is blurring of the particular vibrational band in the vibrational progression. [ 16 ]
The analysis of vibronic spectra of diatomic molecules provides information concerning both the ground electronic state and the excited electronic state. Data for the ground state can also be obtained by vibrational or pure rotational spectroscopy, but data for the excited state can only be obtained from the analysis of vibronic spectra. For example, the bond length in the excited state may be derived from the value of the rotational constant B ′.
In addition to stable diatomic molecules, vibronic spectroscopy has been used to study unstable species, including CH, NH, the hydroxyl radical , OH, and the cyano radical , CN. [ 17 ] The Swan bands in hydrocarbon flame spectra are a progression in the C–C stretching vibration of the dicarbon radical, C 2 , for the $d^3\Pi_u \leftrightarrow a^3\Pi_g$ electronic transition. [ 18 ] Vibronic bands for 9 other electronic transitions of C 2 have been observed in the infrared and ultraviolet regions. [ 2 ]
For polyatomic molecules, progressions are most often observed when the change in bond lengths upon electronic excitation coincides with the change due to a "totally symmetric" vibration. [ note 3 ] This is the same process that occurs in resonance Raman spectroscopy . For example, in formaldehyde (methanal), H 2 CO, the n → π* transition involves excitation of an electron from a non-bonding orbital to an antibonding pi orbital, which weakens and lengthens the C–O bond. This produces a long progression in the C–O stretching vibration. [ 19 ] [ 20 ] Another example is furnished by benzene , C 6 H 6 . In both gas and liquid phase the band around 250 nm shows a progression in the symmetric ring-breathing vibration. [ 21 ]
As an example from inorganic chemistry, the permanganate ion, MnO₄⁻, in aqueous solution has an intense purple colour due to an O → Mn ligand-to-metal charge-transfer (LMCT) band across much of the visible region. [ 22 ] This band shows a progression in the symmetric Mn–O stretching vibration. [ 23 ] The individual lines overlap each other extensively, giving rise to a broad overall profile with some coarse structure.
Progressions in vibrations which are not totally symmetric may also be observed. [ 24 ]
d–d electronic transitions in atoms in a centrosymmetric environment are electric-dipole forbidden by the Laporte rule . This applies to octahedral coordination compounds of the transition metals . The spectra of many of these complexes have some vibronic character. [ 25 ] The same rule also applies to f–f transitions in centrosymmetric complexes of lanthanides and actinides . In the case of the octahedral actinide chloro-complex of uranium(IV), UCl₆²⁻, the observed electronic spectrum is entirely vibronic. At the temperature of liquid helium, 4 K, the vibronic structure was completely resolved, with zero intensity for the purely electronic transition, and three side-lines corresponding to the asymmetric U–Cl stretching vibration and two asymmetric Cl–U–Cl bending modes. [ 26 ] Later studies on the same anion were also able to account for vibronic transitions involving low-frequency lattice vibrations . [ 27 ]
|
https://en.wikipedia.org/wiki/Vibronic_spectroscopy
|
Vibroscope ( Latin : vibrare 'vibrate' + scope ) is an instrument for observing and tracing (and sometimes recording) vibration . [ 1 ] [ 2 ]
For example, a primitive mechanical vibroscope consists of a vibrating object with a pointed tip which leaves a wave trace on the smoked surface of a rotating cylinder . [ 3 ]
Vibroscopes are used to study the properties of substances. For example, a polymer 's torsional modulus and Young's modulus may be determined by setting the polymer vibrating and measuring its frequency of vibration under known external forces. [ 4 ] A similar approach can determine the linear density of thread-shaped objects, such as fibers , filaments , and yarn . [ 5 ]
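Under the ideal-string assumption, the linear density follows directly from the measured resonant frequency. A minimal sketch with illustrative numbers (the tension, gauge length, and frequency are arbitrary example values):

```python
# Linear density of a fiber from its fundamental resonant frequency, using
# the ideal-string relation f = (1/2L) * sqrt(T / rho)  =>  rho = T/(4 L^2 f^2).
T = 5e-3          # tension, N
L = 20e-3         # vibrating (gauge) length, m
f = 800.0         # fundamental frequency, Hz

rho = T / (4 * L**2 * f**2)          # linear density, kg/m
print(f"{rho * 1e6:.2f} tex")        # 1 tex = 1 g/km = 1e-6 kg/m
```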
Vibroscopes are also used to study sound in different areas of the mouth during speech. [ 6 ]
Jean-Marie Duhamel published about an early recording device he called a vibroscope in 1843. [ 7 ]
|
https://en.wikipedia.org/wiki/Vibroscope
|
Vicalloy is a family of cobalt-iron-vanadium wrought ferromagnetic alloys which have high coercivity and are used to make permanent magnets and other magnetic components. Vicalloy is precipitation hardened and can be formed by a number of cold-working techniques. It is commonly used in electromechanical device applications, such as Wiegand wires , because it shows a large Wiegand effect .
It consists of 52% cobalt , 10% vanadium , trace amounts of elements such as carbon and manganese, and balance (~37%) iron . [ 1 ] [ 2 ]
Its maximum magnetic energy product BH max is 1 MGOe when cast and as high as 3.5 MGOe when appropriately cold worked. [ 3 ]
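For comparison with SI-unit data sheets, the quoted CGS energy products convert as in this small sketch (the conversion factor 1 MGOe ≈ 7.96 kJ/m³ is a standard unit identity):

```python
import math

# 1 MGOe = 1e6 G*Oe; 1 G*Oe = 1/(40*pi) J/m^3, so 1 MGOe ~ 7.96 kJ/m^3.
MGOE_TO_KJ_M3 = 1e6 / (40 * math.pi) / 1e3   # ~ 7.9577

for bh_mgoe in (1.0, 3.5):                   # as-cast and cold-worked values
    print(f"{bh_mgoe} MGOe = {bh_mgoe * MGOE_TO_KJ_M3:.1f} kJ/m^3")
```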
Vicalloy was used in hysteresis motors, in both solid and laminated form (the latter for higher-frequency applications), by Vactric Ltd and Walter Jones Ltd (neither company still exists), using material produced and processed by Telcon Ltd (now owned by Carpenter Technology Corporation) from about 2008. Electrical Times published an article, 'Magnetic Alloys for Hysteresis Motors' by D.R. Driver, on 10 August 1967, which includes some magnetic characteristics of Vicalloy and P6 alloy. Work on applications in hysteresis motors was done at Aston University, UK (the late Nick Capoldi, who joined Smiths Industries), and The Electrical Research Association, UK (Tasker and Bradford), amongst others.
|
https://en.wikipedia.org/wiki/Vicalloy
|
The " Vicar of Bray " hypothesis (or Fisher-Muller Model [ 1 ] ) attempts to explain why sexual reproduction might have advantages over asexual reproduction. Reproduction is the process by which organisms give rise to offspring. Asexual reproduction [ 2 ] involves a single parent and results in offspring that are genetically identical to each other and to the parent.
In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis , a special type of cell division that reduces the chromosome number by half. [ 3 ] During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination . This allows them to exchange some of their genetic information . [ 4 ] Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization . Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents.
In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes . Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes . Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection . [ 5 ] [ 6 ]
The hypothesis is named after the Vicar of Bray , a semi-fictionalized cleric who retained his ecclesiastic office by quickly adapting to the prevailing religious winds in England, switching between various Protestant and Catholic rites as the ruling hierarchy changed. [ 7 ] The figure described was Simon Aleyn, vicar between 1540 and 1588. The main work of Thomas Fuller (d. 1661), Worthies of England , describes this man: [ 8 ]
The vivacious vicar [of Bray ] living under King Henry VIII, King Edward VI, Queen Mary, and Queen Elizabeth, was first a Papist , then a Protestant, then a Papist, then a Protestant again. He had seen some martyrs burnt (two miles off) at Windsor and found this fire too hot for his tender temper. This vicar, being taxed [attacked] by one for being a turncoat and an inconstant changeling, said, "Not so, for I always kept my principle, which is this – to live and die the Vicar of Bray." [ 9 ] – Worthies of England , published 1662
The hypothesis was first expressed in 1889 by August Weismann [ 10 ] and later by Guenther (1906). [ 11 ] Afterwards, the hypothesis was formulated in terms of population genetics by Fisher (1930) [ 12 ] and Muller (1932) [ 13 ] and, with greater mathematical formalism, by Muller (1958, 1964) [ 14 ] [ 15 ] and Crow and Kimura (1965). [ 16 ] Doubts about the validity of the Vicar of Bray hypothesis prompted the development of alternative hypotheses.
Mathematical models have been used to try to prove or disprove these hypotheses. For a mathematical model, however, assumptions must be made: about the size of the population, the breeding process, the environment, natural enemies and so on. That is why there will always be populations to which a given model does not apply. Some models are better at explaining the 'average' population, while others better explain smaller populations or populations that live in a more extreme environment. A good way to decide which model is best might be to compare the expected result from the model with data from natural observations. [ 17 ]
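A toy simulation of the kind such models formalize is sketched below (the population size, mutation rate and selection coefficient are arbitrary illustrative choices, not parameters from any published model). It tracks how many generations pass before one individual combines two beneficial mutations, with and without recombination:

```python
import random

def generations_to_combine(pop_size=1000, mu=1e-3, s=0.05, recombination=False):
    """Toy Fisher-Muller model: generations until one individual carries both
    beneficial mutations A and B. Genotypes are (a, b) with a, b in {0, 1}."""
    pop = [(0, 0)] * pop_size
    gen = 0
    while not any(a and b for a, b in pop):
        gen += 1
        # Selection: fitness 1 + s per beneficial allele.
        weights = [(1 + s) ** (a + b) for a, b in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        if recombination:
            # Sexual case: offspring draw each locus from a different parent.
            others = random.choices(pop, weights=weights, k=pop_size)
            parents = [(p[0], q[1]) for p, q in zip(parents, others)]
        # Mutation at each locus.
        pop = [(a or random.random() < mu, b or random.random() < mu)
               for a, b in parents]
    return gen

# Averaged over many runs, the recombining population tends to assemble the
# A-B combination sooner, illustrating the claimed group-level advantage.
```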
Critics of the Vicar of Bray hypothesis (and of all other hypotheses that propose an advantage of sexual over asexual reproduction) point out that sexual reproduction might be beneficial in some situations, but not always, which is why both modes of reproduction still exist. If either sexual or asexual reproduction were much more beneficial, evolution should cause one of the two modes to disappear and the other to persist.
|
https://en.wikipedia.org/wiki/Vicar_of_Bray_(scientific_hypothesis)
|
The Vicarious Hypothesis , or hypothesis vicaria , was a planetary hypothesis proposed by Johannes Kepler to describe the motion of Mars . [ 1 ] [ 2 ] [ 3 ] The hypothesis adopted the circular orbit and equant of Ptolemy's planetary model as well as the heliocentrism of the Copernican model . [ 4 ] [ 5 ] Calculations using the Vicarious Hypothesis did not support a circular orbit for Mars, leading Kepler to propose elliptical orbits as one of three laws of planetary motion in Astronomia Nova . [ 6 ]
In 1600, Johannes Kepler met and began working with Tycho Brahe at Benátky , a town north of Prague where Brahe's new observatory was being built. Brahe assigned Kepler the task of modeling the motion of Mars using only data that Brahe had collected himself. [ 3 ] Upon the death of Brahe in 1601, all of Brahe's data was willed to Kepler. [ 7 ] Brahe's observational data was among the most accurate of his time, which Kepler used in the construction of the Vicarious Hypothesis. [ 8 ]
Claudius Ptolemy's planetary model consisted of a stationary earth surrounded by fixed circles, called deferents , which carried smaller, rotating circles called epicycles . Planets rotated on the epicycles as the epicycles traveled along the deferent. Ptolemy shifted the Earth away from the center of the deferent and introduced another point, the equant , equidistant to the deferent's center on the opposite side of the Earth. [ 9 ]
The Vicarious Hypothesis uses a circular orbit for Mars and reintroduces a form of the equant to describe the motion of Mars with constant angular speed . [ 4 ]
Nicolaus Copernicus broke from the geocentric model of Ptolemy by placing the Sun at the center of his planetary model. However, Copernicus retained circular orbits for the planets and added an orbit for the Earth, insisting that the Earth revolved around the Sun. The Sun was positioned off-center of the orbits but was still contained within all orbits.
Kepler adopted Copernican heliocentrism in the construction of the Vicarious Hypothesis so that his measurements of the distances to Mars were taken relative to the Sun. [ 5 ]
Kepler's construction of the Vicarious Hypothesis was based on a circular orbit for Mars and a heliocentric model for the planets. [ 10 ] After receiving longitudinal observation data from Tycho Brahe, Kepler had twelve observations, two being his own, in which Mars was at opposition to the Sun. [ 11 ] From these twelve observations, Kepler chose four to form the basis of the Vicarious Hypothesis because they had a relatively uniform distribution across his proposed circular orbit for Mars. [ 4 ] In this sense, the Vicarious Hypothesis functions as a fit to observational data. [ 12 ] Kepler used these four observations to determine the eccentricities of the Sun and equant of his proposed orbit. [ 10 ] Unlike the Ptolemaic System, in which the Earth and equant were assumed equidistant to the center of the orbit, the Vicarious Hypothesis placed the equant where the time and location of the observation would match. [ 4 ]
Using the Vicarious Hypothesis, Kepler determined the eccentricities of the Sun and equant to be 11,332 and 7,232 arbitrary units, respectively, for the Martian orbital radius of 100,000 units. Using these positions for the Sun and equant, the model constructed using the Vicarious Hypothesis agreed with the twelve observations within 2' of arc , a level of accuracy better than any other previous model. [ 4 ] While the heliocentric longitudes of this model proved to be accurate, distances from the Sun to Mars, or latitudes of Mars, challenged the model. In his book, Astronomia Nova , Kepler determined that the eccentricity of the Sun, based on latitudinal oppositions, should be between a range of 8,000 and 9,943, conflicting with the eccentricity of 11,332 determined by the Vicarious Hypothesis. [ 3 ] To accommodate the latitudinal data, Kepler modified the Vicarious Hypothesis to include a bisected eccentricity , making the Sun and equant equidistant to the center of the orbit. [ 10 ] This resolved the error in the latitudes of Mars but introduced a longitudinal error of 8' of arc in some parts of the Mars orbit. [ 3 ] While an 8' error still had better accuracy than previous models, corresponding to approximately one-fourth the diameter of the Moon , Kepler rejected the Vicarious Hypothesis because he did not believe it was accurate enough to model the true orbit of Mars. [ 3 ] [ 10 ]
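A minimal numerical reconstruction of such an equant model is sketched below. The placement of the Sun and equant on opposite sides of the centre along the apsidal line, and the choice of axis orientation, are assumptions made for illustration:

```python
import numpy as np

# Toy "vicarious hypothesis": Mars on a circle of radius 100,000 units, moving
# with uniform angular speed about the equant; Sun and equant lie on the
# apsidal line on opposite sides of the circle's centre, at the eccentricities
# Kepler derived (11,332 and 7,232 units).
R, e_sun, e_eq = 100_000.0, 11_332.0, 7_232.0
sun = np.array([0.0, -e_sun])       # assumed: apsidal line along the y-axis
equant = np.array([0.0, +e_eq])

def mars_position(mean_anomaly):
    """Intersect the uniformly rotating ray from the equant with the orbit."""
    direction = np.array([np.sin(mean_anomaly), np.cos(mean_anomaly)])
    # Solve |equant + t*direction| = R for the positive root t.
    b = equant @ direction
    t = -b + np.sqrt(b * b - (equant @ equant) + R * R)
    return equant + t * direction

def heliocentric_longitude(mean_anomaly):
    x, y = mars_position(mean_anomaly) - sun
    return np.degrees(np.arctan2(x, y))   # measured from the apsidal line
```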
The errors in latitude and longitude of the Mars orbit made Kepler realize that the Vicarious Hypothesis rested on false assumptions. In particular, Kepler amended the hypothesis to exclude the circular orbit. [ 4 ] He realized that he could fix the error by pulling in the sides of the circular orbit, creating an ellipse . [ 7 ] He used calculations previously made with the Vicarious Hypothesis to confirm the elliptical orbit for Mars. [ 3 ] Kepler published his results in Astronomia Nova , in which he introduced the elliptical orbit of the planets as his first law of planetary motion. [ 6 ]
|
https://en.wikipedia.org/wiki/Vicarious_Hypothesis
|
In organic chemistry , the vicarious nucleophilic substitution is a special type of nucleophilic aromatic substitution in which a nucleophile replaces a hydrogen atom on the aromatic ring, rather than a leaving group such as a halogen substituent, which is ordinarily displaced in SNAr. This reaction type was reviewed in 1987 by the Polish chemists Mieczysław Mąkosza and Jerzy Winiarski . [ 1 ] [ 2 ]
It is typically encountered with nitroarenes reacting with carbanion nucleophiles, resulting in alkylated arenes: the new substituent takes the ortho or para position, reversing the selectivity for the meta position that is usually observed with such compounds under electrophilic substitution . The carbon nucleophiles carry both an electron-withdrawing group and a leaving group : the nucleophile attacks the aromatic ring, and base-promoted elimination of the leaving group forms an exocyclic double bond, which is subsequently protonated under acidic conditions, restoring aromaticity .
|
https://en.wikipedia.org/wiki/Vicarious_nucleophilic_substitution
|
Vicat softening temperature or Vicat hardness is the determination of the softening point for materials that have no definite melting point , such as plastics . It is taken as the temperature at which the specimen is penetrated to a depth of 1 mm by a flat-ended needle with a 1 mm 2 circular or square cross-section. For the Vicat A test, a load of 10 N is used. For the Vicat B test, the load is 50 N . It is named after the French engineer Louis Vicat .
Standards to determine Vicat softening point include ASTM D 1525 and ISO 306, which are largely equivalent. [ 1 ]
The Vicat softening temperature can be used to compare the heat-softening characteristics of different materials. Four method variants may be used for testing, combining the two loads with a heating rate of either 50 °C/h or 120 °C/h: A50 and A120 (10 N load) and B50 and B120 (50 N load). For data reported according to ISO 10350, Vicat values are tested using the B50 method. The corresponding ASTM standard is D1525.
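The four variants can be summarized programmatically; a sketch, with the load/heating-rate pairings following ISO 306's A50/A120/B50/B120 naming:

```python
# The four ISO 306 method variants as (load, heating rate) pairs.
VICAT_METHODS = {
    "A50":  {"load_N": 10, "heating_rate_C_per_h": 50},
    "A120": {"load_N": 10, "heating_rate_C_per_h": 120},
    "B50":  {"load_N": 50, "heating_rate_C_per_h": 50},   # ISO 10350 reporting
    "B120": {"load_N": 50, "heating_rate_C_per_h": 120},
}

def softening_reached(penetration_mm):
    """The Vicat softening temperature is read when the flat-ended needle
    has penetrated 1 mm into the specimen."""
    return penetration_mm >= 1.0
```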
|
https://en.wikipedia.org/wiki/Vicat_softening_point
|
Vichy Catalán is a Spanish brand of carbonated mineral water bottled from its homonymous thermal spring in Caldes de Malavella , Girona . It is the leading carbonated mineral water in Spain, with a 40% market share. [ 1 ] The brand is owned by Grup Vichy Catalan («Premium Mix Group S.L.»). [ 2 ] The company was founded by the physician and surgeon Modest Furest i Roca after he bought the land around the water spring in Caldes de Malavella and discovered the mineral-medicinal properties of its thermal waters. In 2022, the global revenue of the beverage subsidiary amounted to 133.5 million euros, with a profit of 1.58 million euros and a workforce of 410 people. [ 3 ]
|
https://en.wikipedia.org/wiki/Vichy_Catalán
|
In chemistry , vicinal (from Latin vicinus 'neighbor'), abbreviated vic , is a descriptor that identifies two functional groups as bonded to two adjacent carbon atoms (i.e., in a 1,2-relationship). Such an arrangement may arise from vicinal difunctionalization .
For example, the molecule 2,3-dibromobutane carries two vicinal bromine atoms, whereas 1,3-dibromobutane does not.
Likewise in a gem- dibromide the prefix gem , an abbreviation of geminal , signals that both bromine atoms are bonded to the same carbon atom (i.e., in a 1,1-relationship). For example, 1,1-dibromobutane is geminal. While comparatively less common, the term hominal has been suggested as a descriptor for groups in a 1,3-relationship. [ 1 ]
Like other descriptors, such as syn , anti , exo or endo , the descriptor vicinal helps explain how different parts of a molecule are related to each other, either structurally or spatially. Its use is mostly restricted to molecules with two identical functional groups, and it can also be extended to substituents on aromatic rings.
In 1 H-NMR spectroscopy , the coupling of two hydrogen atoms on adjacent carbon atoms is called vicinal coupling . The coupling constant 3 J represents the coupling of vicinal hydrogen atoms, which couple through three bonds. Depending on the other substituents, the vicinal coupling constant typically takes a value between 0 and +20 Hz. [ 2 ] The dependence of the vicinal coupling constant on the dihedral angle $\phi$ is described by the Karplus relation .
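A commonly quoted form of the Karplus relation is shown below; the constants are empirical and substituent-dependent, and the values given are only typical textbook magnitudes:

```latex
% Karplus relation: vicinal coupling as a function of dihedral angle \phi.
% A, B, C are empirical, substituent-dependent constants; the values shown
% are typical magnitudes only.
{}^{3}J(\phi) \;=\; A\cos^{2}\phi \;+\; B\cos\phi \;+\; C,
\qquad A \approx 7~\mathrm{Hz},\quad B \approx -1~\mathrm{Hz},\quad C \approx 5~\mathrm{Hz}
```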
|
https://en.wikipedia.org/wiki/Vicinal_(chemistry)
|
Vicinal difunctionalization refers to a chemical reaction involving transformations at two adjacent centers (most commonly carbons). This transformation can be accomplished in α,β-unsaturated carbonyl compounds via the conjugate addition of a nucleophile to the β-position followed by trapping of the resulting enolate with an electrophile at the α-position. When the nucleophile is an enolate and the electrophile a proton , the reaction is called Michael addition . [ 1 ]
Vicinal difunctionalization reactions, most generally, lead to new bonds at two adjacent carbon atoms. Often this takes place in a stereocontrolled fashion, particularly if both bonds are formed simultaneously, as in the Diels-Alder reaction . Activated double bonds represent a useful handle for vicinal difunctionalization because they can act as both nucleophiles and electrophiles —one carbon is necessarily electron poor, and the other electron rich. In the presence of a nucleophile and an electrophile, then, the two carbons of a double bond can act as a "relay," mediating electron flow from the nucleophile to the electrophile with the formation of two , rather than the usual one, chemical bonds.
Most often, the nucleophile employed in this context is an organometallic compound and the electrophile is an alkyl halide .
The mechanism proceeds in two stages: β-nucleophilic addition to the unsaturated carbonyl compound, followed by electrophilic substitution at the α-carbon of the resulting enolate .
When the nucleophile is an organometallic reagent, the mechanisms of the first step can vary. Whether reactions take place by ionic or radical mechanisms is unclear in some cases. [ 2 ] Research has shown that the second step may even proceed via single-electron transfers when the reduction potential of the electrophile is low. [ 3 ] A general scheme involving ionic intermediates is shown below.
Lithium organocuprates undergo oxidative addition to enones to give, after reductive elimination of an organocopper(III) species, β-substituted lithium enolates. [ 4 ]
In all cases, the second step is well described as the reaction of an enolate with an electrophile. The two steps may be carried out as distinct experimental operations if the initially formed enolate is protected after β-addition. If the two steps are not distinct, however, the counterion of the enolate is determined by the counterion of the nucleophilic starting material and can influence the reactivity of the enolate profoundly.
Steric approach control is common in conjugate addition reactions. Thus, in cyclic substrates, a trans relationship between substituents on the α- and β-carbons is common. The configuration at the α-position is less predictable, especially in cases when epimerization can occur. On the basis of steric approach control, the new α-substituent is predicted to be trans to the new β-substituent, and this is observed in a number of cases. [ 5 ]
Organocopper reagents are the most common nucleophiles for the β-addition step. These reagents can be generated catalytically in the presence of Grignard reagents using either copper(I) or copper(II) salts. [ 6 ]
Copper reagents can also be used stoichiometrically, and among these, organocuprates are the most common (they are more reactive than the corresponding neutral organocopper(I) compounds). The cuprate counterion may affect the addition and subsequent enolate reaction in subtle ways. [ 7 ] Additions involving higher-order cuprates must be quenched with a silyl halide before alkylation. [ 8 ]
When unsymmetrical cuprates are employed, the group whose carbon-copper bond contains less s character is almost always transferred to the β-position. A few exceptions exist, however. [ 9 ] In the example below, conducting the reaction in THF led to transfer of the vinyl moiety, while other solvents promoted methyl transfer.
Enolates can also be used as nucleophiles for vicinal difunctionalization reactions. To prevent simple Michael addition (which culminates in protonation of the enolate intermediate), trapping by the electrophile must be intramolecular. [ 10 ]
The choice of electrophile should take into account the nature of the enolate generated after the first step. Relatively reactive alkylating agents should be used, especially in cases involving the addition of cuprates (enolates resulting from the addition of cuprates are often unreactive). Oxophilic electrophiles should be avoided if C-alkylation is desired. Electrophiles should also lack hydrogens acidic enough to be deprotonated by an enolate.
Cyclic α,β-unsaturated ketones are the most commonly employed substrates for vicinal difunctionalization. They tend to be more reactive than acyclic analogues and undergo less direct addition than aldehydes. Amides and esters can be used to encourage conjugate addition in cases when direct addition may be competitive (as in the addition of organolithium compounds). [ 11 ]
Because the addition step is highly sensitive to steric effects, β-substituents are likely to slow the reaction. Acetylenic and allenic substrates react to give products with some retained unsaturation. [ 12 ] [ 13 ]
A large number of examples of vicinal difunctionalization of unsaturated carbonyl compounds exist in the literature. In one example, the difunctionalization of unsaturated lactone 1 was employed en route to isostegane. This transformation was accomplished in one pot. [ 14 ]
Because the reaction creates two new bonds with a moderately high degree of stereocontrol, it represents a highly convergent synthetic method.
Organometallic nucleophiles used for conjugate additions are most often prepared in situ . The use of anhydrous equipment and inert atmosphere is necessary. Because these factors are sometimes difficult to control and the strength of freshly prepared reagents can vary substantially, titration methods are necessary to verify the purity of reagents. A number of efficient titration methodologies exist. [ 15 ]
Usually, vicinal difunctionalizations are carried out in one pot, without the intermediacy of a neutral protected enolate. However, in specific cases it may be necessary to protect the intermediate of β-addition. Before reaching this point, however, solvent and nucleophile screens, order of addition adjustments, and counterion adjustments can be made to optimize the one-pot process for a particular combination of carbonyl compound, nucleophile, and alkylating (or acylating) agent. Solvent adjustments between the two steps are common; if one solvent is used, tetrahydrofuran is the solvent of choice. Polar aprotic solvents should be avoided for the conjugate addition step. Concerning temperature, conjugate additions are usually carried out at low temperatures (-78 °C), while alkylations are carried out at slightly higher temperatures (0 to -30 °C). Less reactive alkylating agents may require room temperature.
To 6.25 g (50 mmol) of 4,4-dimethyl-2-cyclohexen-1-one and 0.5 g (5.6 mmol) of cuprous cyanide in 400 mL of diethyl ether at −23° under argon was added 100 mL (~0.75 M in diethyl ether) of 5-trimethylsilyl-4-pentynylmagnesium iodide over 4 hours. Methyl chloroformate (8 mL, 100 mmol) was added and stirring continued for 1 hour at −23° and 0.5 hour at room temperature. Hydrochloric acid (100 mL, 2.0 M) was then added and the organic phase separated and dried with magnesium sulfate . The solvent was removed and the residue chromatographed on silica gel using 5% diethyl ether–petroleum ether to give methyl 3,3-dimethyl-6-oxo-2-[5-(trimethylsilyl)-4-pentynyl]cyclohexanecarboxylate, 9.66 g (60%). IR: 2000, 2140, 1755, 1715, 1660, 1615, 1440, 1280, 1250, 1225, 1205, and 845 cm−1; 1H NMR (CDCl3) δ 0.13 (s, 9H), 0.93 (s, 3H), 1.02 (s, 3H), 1.2–2.3 (m, 11H), 3.74 (s, 3H). Anal. Calcd for C18H30O3Si: C, 67.05; H, 9.4. Found: C, 67.1; H, 9.65.
|
https://en.wikipedia.org/wiki/Vicinal_difunctionalization
|
The vicious circle principle is a principle that was endorsed by many predicativist mathematicians in the early 20th century to prevent contradictions. The principle states that no object or property may be introduced by a definition that depends on that object or property itself. In addition to ruling out definitions that are explicitly circular (like "an object has property P iff it is not next to anything that has property P "), this principle rules out definitions that quantify over domains which include the entity being defined. Thus, it blocks Russell's paradox , which defines a set R that contains all sets which do not contain themselves. This definition is blocked because it defines a new set in terms of the totality of all sets, of which this new set would itself be a member.
However, it also blocks one standard definition of the natural numbers . First, we define a property as being " hereditary " if, whenever a number n has the property, so does n +1. Then we say that x has the property of being a natural number if and only if it has every hereditary property that 0 has. This definition is blocked, because it defines "natural number" in terms of the totality of all hereditary properties, but "natural number" itself would be such a hereditary property, so the definition is circular in this sense.
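In modern second-order notation (a standard rendering, not Russell's original symbolism), the blocked definition reads:

```latex
% ``P is hereditary'': whenever n has P, so does n+1
\mathrm{Her}(P) \;:\Leftrightarrow\; \forall n\,\bigl(P(n) \rightarrow P(n{+}1)\bigr)

% x is a natural number iff x has every hereditary property that 0 has;
% the quantifier over P ranges over all properties, including the property
% being defined, which is exactly what the vicious circle principle forbids.
\mathbb{N}(x) \;:\Leftrightarrow\; \forall P\,\bigl(P(0) \wedge \mathrm{Her}(P) \rightarrow P(x)\bigr)
```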
Most modern mathematicians and philosophers of mathematics think that this particular definition is not circular in any problematic sense, and thus they reject the vicious circle principle. But it was endorsed by many early 20th-century researchers, including Bertrand Russell and Henri Poincaré . On the other hand, Frank P. Ramsey and Rudolf Carnap accepted the ban on explicit circularity, but argued against the ban on circular quantification. After all, the definition "let T be the tallest man in the room" defines T by means of quantification over a domain (men in the room) of which T is a member. But this is not problematic, they suggest, because the definition does not actually create the person, but merely shows how to pick him out of the totality. Similarly, they suggest, definitions do not actually create sets or properties or objects, but rather just give one way of picking out the already existing entity from the collection of which it is a part. Thus, this sort of circularity in terms of quantification cannot cause any problems.
This principle was the reason for Russell's development of the ramified theory of types rather than the theory of simple types . (See "Ramified Hierarchy and Impredicative Principles". [ 1 ] )
An analysis of the paradoxes to be avoided shows that they all result from a kind of vicious circle. The vicious circles in question arise from supposing that a collection of objects may contain members which can only be defined by means of the collection as a whole. Thus, for example, the collection of propositions will be supposed to contain a proposition stating that “all propositions are either true or false.” It would seem, however, that such a statement could not be legitimate unless “all propositions” referred to some already definite collection, which it cannot do if new propositions are created by statements about “all propositions.” We shall, therefore, have to say that statements about “all propositions” are meaningless.… The principle which enables us to avoid illegitimate totalities may be stated as follows: “Whatever involves all of a collection must not be one of the collection”; or, conversely: “If, provided a certain collection had a total, it would have members only definable in terms of that total, then the said collection has no total.” We shall call this the “vicious-circle principle,” because it enables us to avoid the vicious circles involved in the assumption of illegitimate totalities. (Whitehead and Russell 1910, 37) (quoted in the Stanford Encyclopedia of Philosophy entry on Russell's Paradox )
|
https://en.wikipedia.org/wiki/Vicious_circle_principle
|
Victimless Leather (2004) is an art piece presenting a leather jacket grown without killing any animals. It is a prototype of a stitch-less jacket, grown from cell cultures into a layer of tissue supported by a coat-shaped polymer layer. [ 1 ] [ 2 ] "Victimless Leather" was created as a sub-project of the Tissue Culture & Art Project (also part of SymbioticA ) from the University of Western Australia and showcased at the Museum of Modern Art in New York. [ 3 ] [ 4 ] The artwork, a miniature jacket made from living mouse stem cells in an incubator, was designed to challenge perceptions of life and to confront viewers with the moral implications of wearing parts of dead animals for protective and aesthetic reasons, as well as with notions of our relationship to manipulated living systems. Due to rapid cell growth, the exhibit was eventually "killed" by cutting off its nutrients, in keeping with the creators' intent to remind viewers of the responsibility towards manipulated life. [ 5 ]
The SymbioticA website describes itself as "an artistic laboratory dedicated to the research, learning, critique and hands-on engagement with the life sciences". [ 6 ] The artists work with tissue, constructing and growing complex organisms that can live outside the body, making the new objects semi-living. Their projects are meant to question life, identity and the relationship between humans and other living beings and environments. They are also interested in the ethics around partial life and the possibilities this type of technology may open in the future. [ 7 ]
These are some of the sub-projects that are also part of the TC&A project, in addition to Victimless Leather:
NoArk – a collection of cells and tissue from many different organisms, growing together inside a "vessel", a reference to Noah's Ark . The project website states that NoArk is "a tangible as well as symbolic 'craft' for observing and understanding a biology that combines the familiar with the other". [ 8 ]

Worry Dolls – seven modern versions of the Guatemalan Worry doll , hand-crafted from degradable polymers and surgical sutures and seeded with skin, muscle and bone tissue. The tissue was then allowed to grow inside a bioreactor, replacing the polymers as they degraded, thus creating seven semi-living dolls. [ 9 ]

Disembodied Cuisine – along with Victimless Leather, part of the "Victimless Utopia". The idea for this project was to take cells from a frog and grow them into something edible, displaying it next to the still-living frog the food came from. This problematizes the whole meat industry – how we kill animals to eat them. Growing tissue outside the body of the animal is a way to "make" meat while the animal stays healthy and alive. [ 10 ]

Extra Ear – ¼ Scale – a quarter-scale replica of artist Stelarc 's ear, grown from human tissue. The project wanted to confront the cultural perceptions of life now that we are able to manipulate living systems, and also to discuss notions of the wholeness of the body. [ 11 ]

The Pig Wings Project – based on ideas around xenotransplantation and genetically modifying pigs so their organs can be transplanted to humans. The artists used tissue engineering and stem cell technologies to grow pig bone tissue into three different sets of wings: one set shaped as bat wings ("evil"), one as bird wings ("good") and one as pterosaur wings ("neutral"). [ 12 ]
An aim of the "Victimless Leather" project is to explore and provoke scientific truths, using conceptual art projects to encourage better understanding of cultural ideas around scientific knowledge. The project website points out how clothing has always been used to protect the fragile skin of humans, but has lately evolved into a fabricated object used as a tool to display one's identity. On this basis, clothing can be explored as a tangible example of the relationship between humans and others, and of how humans treat others. As the artists put it: "This particular project will deconstruct our cultural meaning of clothes as a second skin by materialising it and displaying it as an art object." [ 1 ]
The intention to grow artificial leather without killing an animal is meant as a contribution to a cultural discussion between art, science and society. Making this jacket presents the possibility of wearing leather that is not part of a dead animal. The projects are presented in workshops and exhibitions throughout the world to address ethical questions related to bioscience and technology . The artists want the project to be seen in this cultural context, not a commercial one. [ 13 ] Their intention is not to create a consumer product, but to offer a starting point for the cultural discussion mentioned above. As with most of the TC&A projects, the artists are concerned with the relationship between humans and other living systems, both natural and scientifically made or manipulated. [ 1 ]
The artists wanted to make a leather-like material using living tissue, and ended up making it in the shape of a stitchless jacket. The artists based the jacket on a biodegradable polymer, coated it with 3T3 mouse cells to form connective tissue and topped it up with human bone cells in order to create a stronger skin layer.
To create the victimless leather, the team needed an artificial environment in which semi-living entities could be grown, so the jacket is grown inside a bioreactor that acts as a surrogate body. The bioreactor used in this project was custom made, based on an organ perfusion pump designed by Alexis Carrel and Charles Lindbergh . It has an automatic dripping system which feeds the cells. [ 1 ] The artists assumed that when the polymer degraded, an integrated jacket would appear. The resulting jacket was tiny, about 2 inches high and 1.4 inches wide, and would just fit a mouse. [ 14 ]
To realize the "victimless" aspect of the jacket, the project used immortalized cell lines, cells that divide and multiply indefinitely once they are removed from an animal or human host and thus form a renewable resource. The 3T3 mouse cells all come from a mouse that lived in the 1970s. [ 15 ]
The research and development of “Victimless Leather” was conducted in SymbioticA: the Art and Science Collaborative Research Laboratory, School of Anatomy and Human Biology at the University of Western Australia, in consultation with Professor Arunasalam Dharmarajan of the School of Anatomy and Human Biology as well as Verigen, a Perth-based company that specializes in tissue-engineered cartilage for clinical applications. The state of Western Australia invested in the project through ArtsWa, in association with the Lotteries Commission. [ 16 ]
The following is drawn from a list of selected TC&A exhibitions. [ 17 ]
The victimless leather was featured in the exhibition “Design and the Elastic Mind” at The Museum of Modern Art in New York City , United States, from February 24 to May 12, 2008. [ 18 ] The project was scheduled to grow continuously until the exhibition ended on May 12. There were problems, however, when the leather started expanding too quickly, clogging tubes inside the bioreactor. The exhibition leader therefore decided to unplug the project before the end of the exhibition, in a way killing it. [ 19 ]
Because the artificial leather is semi-living, the project may strike some people as speculative and provocative, raising the question of whether it is better to kill semi-living beings than living beings. "One of the most common and somewhat surprising comments we heard was that people were disturbed by our ethics of using living cells to grow living fabric, while the use of leather obtained from animals seems to be accepted without any concern for the well-being of the animals from which the skin has been removed," said artist Ionat Zurr after showing the jacket to the public at "The Space Between" exhibition [ 20 ] in Australia. [ 14 ]
Oron Catts was born in Helsinki , Finland, in 1967 and currently resides in Perth , Australia, where he has been employed at the University of Western Australia since 1996. [ 21 ] He is the artistic director and a co-founder of SymbioticA, and the founder of the Tissue Culture & Art Project. From 2000 to 2001 he was a Research Fellow at the Tissue Engineering and Organ Fabrication Laboratory at Harvard Medical School . [ 22 ] He has also worked with numerous other bio-medical laboratories in several different countries. [ 21 ]
Ionat Zurr was born in London , UK, in 1970 and currently resides in Perth , Australia. Since 1996 she has been employed at the University of Western Australia , where she also completed her PhD, titled "Growing Semi-Living Art", in the Faculty of Architecture, Landscape and Visual Arts. Zurr specialises in video production and biological and digital imaging. [ 23 ] She works as an assistant professor and academic coordinator at SymbioticA [ 24 ] and is co-founder of the Tissue Culture & Art Project. From 2000 to 2001 she was a research fellow at the Tissue Engineering and Organ Fabrication Laboratory at Harvard Medical School . [ 22 ]
The project also focuses on human consumption, highlighting that there are victims at every level of it, and that the boundaries between harm and benefit are often blurred. The French actress Brigitte Bardot strongly opposed the hunting of animals for vanity and the commercial exploitation of their skins. From an indigenous point of view, the context is seen differently: an Inuk hunts the seal and uses the whole body of the animal as a basic resource for his survival. [ 25 ]
Related to such ethical questions is the interest in artificial meat production, driven by reactions against the livestock sector over health and environmental problems. A 2006 report by the UN Food and Agriculture Organization (FAO) identified the livestock sector as one of the most significant contributors to the most serious environmental problems, such as soil degradation and water pollution . Because of methane emissions, the sector also releases more greenhouse gases into the atmosphere than transportation. [ 26 ]
|
https://en.wikipedia.org/wiki/Victimless_Leather
|
The victor symbol (Spanish: víctor or vítor ) is an emblem that is painted on the walls of some Spanish and Latin American universities to commemorate students who have received the degree of doctorate . The custom dates back to the 14th century, and the symbol has historically been used at older universities in the Spanish-speaking world, such as the University of Salamanca , [ 1 ] the University of Alcalá , and the University of Seville [ 2 ] in Spain, as well as the National University of San Marcos in Lima , Peru . [ 3 ] According to the custom, when a student receives the doctorate, the victor symbol is painted on the walls of the university in red or black paint along with the student's name.
At the end of the Spanish Civil War , the victor sign was appropriated by the nationalists as a symbol of their victory in the war, and it came to be used as a personal emblem for the dictator Francisco Franco . Despite its former use by Franco, it is still used in its original sense at several universities.
The victor symbol takes the shape of the letters V, I, C, T, O, and R arranged in a monogram that varies from symbol to symbol. In some cases, the letter C is omitted. Usually, the name of the student is painted alongside the symbol. The symbol is sometimes also used to commemorate a notable person who visited the university or has some special connection with it.
|
https://en.wikipedia.org/wiki/Victor_(symbol)
|
Victor Almon McKusick (October 21, 1921 – July 22, 2008) was an American internist and medical geneticist , and Professor of Medicine at the Johns Hopkins Hospital , Baltimore . [ 1 ] He was a proponent of the mapping of the human genome due to its use for studying congenital diseases. He is well known for his studies of the Amish . He was the original author and, until his death, remained chief editor of Mendelian Inheritance in Man (MIM) and its online counterpart Online Mendelian Inheritance in Man (OMIM). He is widely known as the "father of medical genetics". [ 2 ]
Victor and his identical twin Vincent L. McKusick were born on October 21, 1921. Victor was one of five children. His father was a graduate of Bates College . [ 1 ] Before deciding to work as a dairy farmer, Victor's father served as a high school principal in Chester, Vermont . Victor's mother had been an elementary school teacher before marrying. Victor and his siblings were raised on a dairy farm in Parkman, Maine . [ 2 ]
During the summer of 1937, Victor suffered a severe microaerophilic Streptococcus infection in his axilla . [ 3 ] As a result, Victor spent time in two hospitals, one of which was Massachusetts General Hospital . He finally saw a successful diagnosis and course of treatment, using sulfanilamide during his ten weeks at Massachusetts General. [ 1 ] Since none of his close family were doctors, the events of 1937 represented McKusick's first substantial experience with the medical community. He stated, "Perhaps I would have ended up a lawyer if it weren't for the microaerophilic streptococcus ." [ 2 ]
Victor married Anne Bishop McKusick in 1949. Anne served Johns Hopkins Hospital as associate professor of medicine in the Division of Rheumatology. [ 1 ] The couple had two sons, Victor and Kenneth, and a daughter, Carol. [ 4 ]
After high school, Victor chose to study at Tufts University , and studied there for six semesters from the fall of 1940 to the summer of 1942. [ 5 ] Although Tufts had an associated medical school, Victor was fascinated by Johns Hopkins and by its dedication to medical research, and chose to attend Hopkins Medical School instead.
During World War II, The Johns Hopkins University School of Medicine could not fill its classes. Therefore, for the first time since the school's founding in 1893, the school temporarily discontinued requiring a baccalaureate degree for admission. [ 1 ] Victor applied during his sixth semester at Tufts and began in the fall of 1942, becoming one of the very few students ever to enter the school without a bachelor's degree. Victor never earned a baccalaureate degree, although he was awarded more than 20 honorary degrees. [ 6 ] He earned his Doctor of Medicine through an accelerated program in only three years. [ 5 ] He was offered the prestigious William Osler Internship in Internal Medicine at Johns Hopkins Hospital, and chose to remain at Hopkins for his residency. [ 1 ] He completed his residency training as a cardiologist , since a department of genetics did not exist at the time. McKusick specialized in heart murmurs and utilized spectroscopy to analyze heart sounds. [ 2 ]
In 1956 McKusick traveled to Copenhagen to speak about the heritable disorders of connective tissue at the first international congress of human genetics. The meeting is regarded as the birthplace of the medical genetics field. [ 2 ] In the following decades, McKusick went on to head the Chronic Disease Clinic and created and chaired a new Division of Medical Genetics at Hopkins beginning in 1957. In 1973, he served as Physician-in-Chief, William Osler Professor of Medicine, and Chairman of the Department of Medicine at Johns Hopkins Hospital and School of Medicine. [ 7 ] McKusick resigned the appointments in 1985, but continued to teach, conduct research, and practice medicine in the Departments of Medicine and Medical Genetics. He held concurrent appointments as University Professor of Medical Genetics at the McKusick–Nathans Institute of Genetic Medicine, Professor of Medicine at the Johns Hopkins School of Medicine , Professor of Epidemiology at the Johns Hopkins Bloomberg School of Public Health , and Professor of Biology at Johns Hopkins University . [ 5 ] McKusick played a role in the development of the HeLa cell line that has been instrumental in biomedical research, although he did not reveal to the Lacks family all the details about subsequent blood draws, which were for genotyping HeLa. [ 8 ] [ 9 ] He held numerous faculty appointments while remaining at Johns Hopkins until his death in 2008. [ 1 ]
In 1960, McKusick founded and co-directed the Annual Short Course in Medical and Experimental Mammalian Genetics at the Jackson Laboratory in Bar Harbor, Maine . [ 2 ] He published Mendelian Inheritance in Man (MIM) , which was the first published catalog of all known genes and genetic disorders, in 1966. [ 7 ] The complete text of MIM was made available online free of charge beginning in 1987, and titled Online Mendelian Inheritance in Man (OMIM) . [ 2 ] The 12th and final print edition was published in 1998. The online database is continually updated, and linked with the National Center for Biotechnology Information . [ 5 ] OMIM is distributed through the National Library of Medicine , and has been a part of the Entrez database network system since 1995. At the time of McKusick's death, OMIM contained 18,847 entries. He also led the Annual Course in Medical Genetics at the University of Bologna Residential Center in Bertinoro di Romagna, Italy in 1987. [ 10 ] McKusick was founding president of the Human Genome Organization in 1989. [ 4 ]
McKusick wrote extensively on the history of medicine, genetics, medical genetics, and about Parkman, Maine. He co-founded Genomics in 1987 with Frank Ruddle, and served as an editor. [ 7 ] He led a Congressionally-chartered committee examining the ethics of testing Abraham Lincoln's tissue for the presence of Marfan syndrome genes. [ 11 ]
His well-known published articles include:
In a 2005 paper presented by M.I. Poling, McKusick said:
I have always told my students, residents, and fellows, if you want to really get on top of some topic, you need to know how it got from where it was to how it is now. I was always strong on eponyms, too—like Marfan syndrome , Freeman–Sheldon syndrome , Down syndrome , Tay–Sachs disease , etc. On rounds, the resident or student would present a patient with some particular condition, and I would always ask, so who is so and so for whom the disease was named. This prompts thought and research into the disease or condition itself to find out who first described it and, therefore, for whom it was named. [ 3 ]
McKusick's study of genetics among the Amish is perhaps his most famous research. On his first trip to Amish homes, he was accompanied by David Krusen, who had an extensive medical practice among the Amish in Lancaster, Pennsylvania . McKusick spoke about his introduction to Krusen's work, stating, "He [Krusen] indicated to the author of the article—in a slick-paper, pharmaceutical company 'throw-away'—that achondroplasia is frequent among the Amish." [ 15 ] Initial study led to the identification of two recessive conditions named Ellis–van Creveld syndrome and cartilage-hair hypoplasia (later named metaphyseal chondrodysplasia, McKusick type). [ 15 ]
McKusick listed fifteen advantages of studying genetics among the Amish, and these fifteen reasons are argued to still hold today. McKusick's findings led many other researchers to study hereditary diseases in the 1960s and 1970s. Both McKusick and other researchers cite the Amish as working cooperatively with researchers to determine the causes of inherited diseases. McKusick published his official findings from working with the Amish in 1978, titled Medical Genetic Studies of the Amish . [ 15 ]
McKusick received more than 20 honorary degrees throughout and after his career. [ 6 ] He was also a member of the United States National Academy of Sciences , [ 16 ] the American Philosophical Society , [ 17 ] and the American Academy of Arts and Sciences . [ 18 ]
Some of the awards he won are listed below:
McKusick died of cancer at the age of 86, on July 22, 2008. [ 3 ] He died at his home right outside of Baltimore, in Towson, Maryland . [ 1 ] On the 21st, the day before he died, he watched a live-stream of a course on medical genetics from Bar Harbor, Maine , which he helped found and direct in 1960. [ 5 ]
|
https://en.wikipedia.org/wiki/Victor_A._McKusick
|
Victor Moritz Goldschmidt ForMemRS (27 January 1888 – 20 March 1947) was a Norwegian mineralogist considered (together with Vladimir Vernadsky ) to be the founder of modern geochemistry and crystal chemistry, developer of the Goldschmidt Classification of elements.
Goldschmidt was born in Zürich , Switzerland on 27 January 1888. [ 1 ] : 7 His father, Heinrich Jacob Goldschmidt , (1857–1937) was a physical chemist at the Eidgenössisches Polytechnikum and his mother, Amelie Koehne (1864–1929), was the daughter of a lumber merchant. They named him Viktor after a colleague of Heinrich, Victor Meyer . His father's family was Jewish back to at least 1600 and mostly highly educated, with rabbis, judges, lawyers and military officers among their numbers. [ 2 ] As his father's career progressed, the family moved first to Amsterdam in 1893, to Heidelberg in 1896, and finally to Kristiania (later Oslo ), Norway in 1901, where he took over the physical chemistry chair at the university. The family became Norwegian citizens in 1905. [ 3 ]
Goldschmidt entered the University of Kristiania (later the University of Oslo ) in 1906 and studied inorganic and physical chemistry , geology , mineralogy , physics , mathematics , zoology and botany . [ 3 ] He secured a fellowship for his doctoral studies from the university at the age of 21 (1909). He worked on his thesis with the noted geologist Waldemar Christofer Brøgger and obtained his Norwegian doctor’s degree when he was 23 years old (1911). For his dissertation titled Die Kontaktmetamorphose im Kristianiagebiet ("The Contact Metamorphism in the Kristiania Region"), the Norwegian Academy of Sciences awarded him the Fridtjof Nansen award in 1912. The same year he was made Docent (Associate Professor) of Mineralogy and Petrography at the university. [ 3 ]
In 1914 Goldschmidt applied for a professorship in Stockholm and was offered the position. To entice him to stay, the University of Kristiania persuaded the government to establish a mineralogical institute with a professorship for him. [ 2 ] : 19 In 1929 Goldschmidt was appointed the chair of mineralogy in Göttingen , and he hired Reinhold Mannkopff and Fritz Laves as his assistants. [ 2 ] : 54, 58 However, after the rise of the Nazis to power, he became unhappy with the treatment of non-Aryans like himself (although the university treated him well) and he resigned in 1935 and returned to Oslo. [ 4 ] : 21 In 1937, he was invited by the Royal Society of Chemistry to give the Hugo Müller lecture. [ 5 ]
On 9 April 1940 the Germans invaded Norway. On 26 October 1942 Goldschmidt was arrested at the orders of the German occupying powers as part of the persecution of Jews in Norway during World War II. Taken to the Berg concentration camp , he became seriously ill and after a stay in a hospital near Oslo, he was released on 8 November, only to be rearrested on 25 November. However, as he was on the pier and about to be deported to Auschwitz , he was freed because some colleagues had persuaded the chief of police that his scientific expertise was essential to the state. [ 4 ] : 22 Goldschmidt soon fled to Sweden . [ 4 ] : 23
Goldschmidt was flown to England on 3 March 1943 by a British intelligence unit, and provided information about technical developments in Norway. After a short period of uncertainty about his future status, he was assigned to the Macaulay Institute for Soil Research (in Aberdeen) of the Agricultural Research Council . He participated in discussions about the German use of raw materials and production of heavy water . He attended open meetings in Cambridge, Manchester, Sheffield, Edinburgh and Aberdeen and lectured at the British Coal Utilisation Research Association on the presence of rare elements in coal ash . [ 6 ] [ 4 ] : 24 His British professional associates and contacts included Leonard Hawkes , C E Tilley and W H Bragg , J D Bernal , Dr W G (later Sir William) Ogg . [ 4 ] : 18, 24
Goldschmidt moved from Aberdeen to Rothamsted , where he was popular and nicknamed ‘Goldie’. However, he wanted to go back to Oslo – not welcomed by all Norwegians – and returned there on 26 June 1946, but died soon after, at age 59. [ 4 ] : 26
For his thesis, Goldschmidt studied the Oslo graben , a valley formed by the downward displacement of a block of land along faults on each side. The region had recently been mapped by Brøgger. In the Permian , magmas intruded into the older rocks, heating the surrounding rock. This resulted in mineralogical changes known as contact metamorphism , resulting in a fine-grained class of rocks known as hornfels . Goldschmidt made a systematic study of the hornfels . He showed that, of the minerals to be found in the hornfels, only certain associations occurred. For example, andalusite could be associated with cordierite but never with hypersthene . [ 2 ] : 13–14
From his data on the hornfels, Goldschmidt deduced a mineralogical phase rule . It is a special case of the Gibbs' phase rule for phases in thermodynamic equilibrium with each other, which states that F = C − P + 2,
where C is the minimum number of chemical components , P is the number of phases , and F is the number of degrees of freedom (e.g., temperature and pressure) that can vary without changing C or P . As an example, the chemical compound Al 2 SiO 5 can occur naturally as three different minerals: andalusite , kyanite and sillimanite . There is a single component ( C = 1 ), so if all three minerals coexist ( P = 3 ), then F = 0 . That is, there are no degrees of freedom, so there is only one possible combination of pressure and temperature. This corresponds to the triple point in the phase diagram . [ 2 ] : 15–16
If the same mineral association is found in several rocks over some region, it must have crystallized at a range of temperatures and pressures. In that case, F must have been at least 2, so C − P + 2 ≥ 2, that is, P ≤ C.
This expresses Goldschmidt's mineralogical phase rule: the number of phases is no greater than the number of components. [ 8 ] [ 9 ]
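A compact worked restatement of the two cases above (a sketch using only the quantities already defined in this section):

```latex
% Gibbs' phase rule for phases in thermodynamic equilibrium:
%   F = C - P + 2
% Triple-point example: one component (Al2SiO5) with three coexisting
% phases (andalusite, kyanite, sillimanite):
\[
  F = C - P + 2 = 1 - 3 + 2 = 0 ,
\]
% so temperature and pressure are both fixed (the triple point).
% Regional metamorphism: the same assemblage over a range of
% temperatures and pressures requires F >= 2, hence
\[
  C - P + 2 \ge 2 \quad\Longrightarrow\quad P \le C ,
\]
% which is Goldschmidt's mineralogical phase rule.
```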
In the early 20th century, Max von Laue and William L. Bragg showed that X-ray scattering could be used to determine the structures of crystals. In the 1920s and 1930s, Goldschmidt and associates at Oslo and Göttingen applied these methods to many common minerals and formulated a set of rules for how elements are grouped. Goldschmidt published this work in the series Geochemische Verteilungsgesetze der Elemente [Geochemical Laws of the Distribution of Elements] . [ 10 ] : 2 [ 11 ]
The majority of Goldschmidt's publications are in German or Norwegian . His English textbook, Geochemistry , was edited and published posthumously in 1954. [ 4 ] : 30 A complete list of his bibliography is compiled elsewhere. [ 12 ]
|
https://en.wikipedia.org/wiki/Victor_Goldschmidt
|
Francois Auguste Victor Grignard (6 May 1871 – 13 December 1935) was a French chemist who won the Nobel Prize [ 2 ] [ 3 ] for his discovery of the eponymously named Grignard reagent and Grignard reaction , both of which are important in the formation of carbon–carbon bonds . Many of his experiments are recorded in his laboratory notebooks. [ 4 ] [ 5 ]
Grignard was the son of a sailmaker. He was a hard-working student and was described as having a humble and friendly attitude. He also had a talent for mathematics. [ 1 ] After attempting to major in mathematics, Grignard failed his entrance exams before being drafted into the army in 1892. [ 6 ] [ 7 ] After one year of service, he returned to pursue his studies of mathematics at the University of Lyon and finally obtained his degree Licencié ès Sciences Mathématiques in 1894. [ 8 ] [ 9 ] In December of the same year, he transferred to chemistry and began working with Professors Philippe Barbier (1848–1922) and Louis Bouveault (1864–1909). After working with stereochemistry and enynes, Grignard was not impressed with the subject matter and asked Barbier about a new direction for his doctoral research. [ 10 ] Barbier advised Grignard to study why a Saytzeff reaction that had failed with zinc succeeded, in low yields, when magnesium was substituted. [ 11 ] [ 10 ] They sought to synthesize alcohols from alkyl halides , aldehydes , ketones , and alkenes . [ 10 ] Grignard hypothesized that the aldehyde or ketone prevented the magnesium from reacting with the alkyl halide, accounting for the low yields. He tested his hypothesis by first adding an alkyl halide and magnesium filings to a solution of anhydrous ether and then adding the aldehyde or ketone. This resulted in a drastic increase in the yield of the reaction. [ 10 ]
A couple of years later, Grignard was able to isolate the intermediate. [ 11 ] He had heated a mixture of magnesium turnings and isobutyl iodide and added dry ethyl ether to the mixture, observing the reaction. [ 1 ] The product is known as a Grignard reagent. Named after him, this organo-magnesium compound (R-MgX) (R = alkyl ; X = Halogen) readily reacts with ketones, aldehydes, and alkenes to produce their respective alcohols in impressive yields. Grignard had discovered the synthetic reaction that now bears his name (the Grignard reaction ) in 1900. In 1901, he published his doctoral thesis titled "Thèses sur les combinaisons organomagnesiennes mixtes et leur application à des synthèses d‘acides, d‘alcools et d‘hydrocarbures". [ 12 ] He became a lecturer in organic chemistry at the University of Nancy in 1909, and was promoted to full professor in 1910. [ 1 ] In 1912 he and Paul Sabatier (1854–1941) were awarded the Nobel Prize in Chemistry . [ 13 ] [ 14 ] During World War I he studied chemical warfare agents with Georges Urbain at Sorbonne University , particularly the manufacture of phosgene and the detection of mustard gas . [ 11 ] In 1918, Grignard discovered that sodium iodide could be used as a battlefield test for mustard gas. Sodium iodide converts mustard gas to diiododiethyl sulfide, which crystallizes more easily than mustard gas. This test could detect as little as 0.01 gram of mustard gas in one cubic meter of air and was successfully used on the battlefield. [ 10 ] His counterpart on the German side was another Nobel Prize–winning chemist, Fritz Haber . [ 15 ]
Grignard died on 13 December 1935 in Lyon , at the age of 64. [ 16 ] [ 17 ] By that time, around 6,000 papers reporting applications of the Grignard reaction had been published. [ 18 ]
Grignard is most noted for devising a new method for generating carbon-carbon bonds, using magnesium to couple ketones and alkyl halides. [ 19 ] This reaction is valuable in organic synthesis . It occurs in two steps: first, the alkyl halide reacts with magnesium metal in dry ether to form the organomagnesium (Grignard) reagent; second, the Grignard reagent adds to the carbonyl compound, and aqueous work-up then gives the alcohol, as sketched below.
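A minimal schematic of the two steps (written here in generic form; R, R′ and R″ stand for arbitrary organic groups and X for a halogen):

```latex
% Requires amsmath. Step 1: formation of the Grignard reagent in dry ether
\[
  \mathrm{R\text{--}X} + \mathrm{Mg} \;\xrightarrow{\ \text{dry ether}\ }\; \mathrm{R\text{--}MgX}
\]
% Step 2: addition to a carbonyl compound, then aqueous work-up to the alcohol
\[
  \mathrm{R\text{--}MgX} + \mathrm{R'R''C{=}O} \;\longrightarrow\; \mathrm{R'R''C(OMgX)R}
  \;\xrightarrow{\ \mathrm{H_3O^+}\ }\; \mathrm{R'R''C(OH)R}
\]
```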
Grignard was drafted into the French military as part of obligatory military service in 1892. Within the two years of his first session of service he rose to the rank of corporal. [ 20 ] He was demobilized in 1894 and returned to Lyon to pursue his education. [ 20 ] He was awarded a medal of the Legion of Honour and made a Chevalier in 1912 after winning the Nobel Prize. [ 20 ] When World War I broke out, Grignard was drafted back into the military, keeping his rank of corporal. [ 20 ] He was placed on sentry duty, and served there for several months until he was brought to the attention of the General Staff. [ 20 ] Grignard had been wearing his Medal of the Legion of Honour, despite being ordered to take it off by a superior. [ 20 ] After looking more into Grignard, the General Staff decided that he would be better suited for research than sentry duty, so they assigned him to the explosives division. [ 20 ] Grignard's research shifted to antidotes to chemical weapons when production of TNT was no longer sustainable, and eventually Grignard was assigned to research new chemical weapons for the French army. [ 20 ]
|
https://en.wikipedia.org/wiki/Victor_Grignard
|
The Victor Meyer apparatus is a standard laboratory method for determining the molecular weight of a volatile liquid . It was developed by Viktor Meyer , who spelled his name Victor in publications at the time of its development. In this method, a known mass of the volatile solid or liquid under examination is converted into its vapour by heating in a Victor Meyer's tube. The vapour displaces its own volume of air. The volume of air displaced at the experimental temperature and pressure is measured, and the corresponding volume at standard temperature and pressure (STP) is calculated. From this, the mass of substance that would occupy 2.24 × 10⁻² m³ (22,400 cm³) of vapour at STP is calculated; this value represents the molecular mass of the substance. The apparatus consists of an inner Victor Meyer's tube, the lower end of which is in the form of a bulb. The upper end of the tube has a side tube that leads to a trough filled with water. The Victor Meyer's tube is surrounded by an outer jacket. In the outer jacket, a liquid is placed which boils at a temperature at least 30 K higher than the boiling point of the substance under examination. A small quantity of glass-wool or an asbestos pad covers the lower end of the Victor Meyer's tube to prevent breakage when a glass bottle containing the substance under examination is dropped into it.
The liquid in the outer jacket is heated until no more air escapes from the side tube. Then, a graduated tube filled with water is inverted over the side tube, dipping into a trough filled with water. A small quantity of the substance is weighed exactly in a small stoppered bottle, dropped into the Victor Meyer's tube, and the tube is sealed immediately. The bottle falls onto the asbestos pad and its contents suddenly change into vapour, blow out the stopper and displace an equal volume of air into the graduated tube. The volume of air displaced is measured by taking the graduated tube out, closing its mouth with a thumb and dipping it into a jar filled with water. When the water levels inside and outside the tube are equal, the volume of air displaced is noted. The atmospheric pressure and laboratory temperature are also noted.
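The final step, reducing the measured volume to STP and scaling to the molar volume, is simple arithmetic. The sketch below illustrates it with invented example values (0.12 g of sample displacing 45 cm³ of air at 300 K and 100 kPa); none of these numbers come from the article.

```python
# Minimal sketch of the Victor Meyer calculation, reducing the displaced
# air volume to STP with the combined gas law and scaling to one mole.
# All measured values below are hypothetical examples.

def molecular_mass(mass_g, volume_m3, temp_K, pressure_Pa):
    """Return the molecular mass (g/mol) from Victor Meyer measurements:
    sample mass, displaced air volume, laboratory temperature and pressure."""
    STP_T = 273.15            # K
    STP_P = 101325.0          # Pa
    MOLAR_VOLUME = 2.24e-2    # m^3 of vapour per mole at STP

    # Combined gas law: P*V/T = P_stp*V_stp/T_stp  =>  V_stp = V*(P/P_stp)*(T_stp/T)
    v_stp = volume_m3 * (pressure_Pa / STP_P) * (STP_T / temp_K)

    # The sample mass corresponds to v_stp of vapour; scale to the molar volume.
    return mass_g * MOLAR_VOLUME / v_stp

# Hypothetical example measurement: prints roughly 66.5 g/mol.
print(round(molecular_mass(0.12, 45e-6, 300.0, 100e3), 1))
```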
Victor Meyer also suggested a method for determining the type of an alcohol (primary, secondary or tertiary). In this method the sample alcohol is treated with PI 3 to give the iodoalkane , which is then treated with AgNO 2 to give the nitroalkane . The nitroalkane is treated with nitrous acid (generated from NaNO 2 and HCl ), and the resulting solution is treated with KOH and the colour observed. A red colour indicates a primary alcohol, a blue colour a secondary alcohol, and no colour a tertiary alcohol.
|
https://en.wikipedia.org/wiki/Victor_Meyer_apparatus
|
Victor Snieckus (August 1, 1937 - December 18, 2020 [ 2 ] ) was a synthetic organic chemist and professor emeritus at Queen's University in Kingston, Ontario . He was known for his influential research on directed ortho metalation . [ 2 ] [ 3 ] [ 4 ]
Snieckus was born in Kaunas , Lithuania in 1937. His family lived in Germany during World War II and in 1948 immigrated to Alberta, Canada . Snieckus received his bachelor's degree in chemistry from University of Alberta in 1959, his master's degree from University of California, Berkeley in 1961, and his PhD from the University of Oregon in 1965 under the supervision of Virgil Boekelheide . [ 4 ] He spent a year as a postdoctoral scholar at the National Research Council of Canada . [ 2 ] [ 3 ]
In 1967 Snieckus joined the faculty of the University of Waterloo as an assistant professor , becoming an associate professor in 1971 and a full professor in 1979. [ 2 ] [ 3 ] He assumed the Monsanto /NRC research chair in 1992. [ 3 ] [ 5 ]
Snieckus relocated to Queen's University in 1998, where he assumed the Bader Chair of Chemistry. He retired and became professor emeritus in 2009. [ 2 ] He founded a company, Snieckus Innovations, the same year, originally funded by Alfred Bader . [ 6 ] Throughout his career Snieckus consulted for the pharmaceutical and agricultural industries. [ 2 ] [ 3 ]
Among other awards, Snieckus became a Fellow of the Royal Society of Canada in 1993, a Fellow of the Lithuanian Academy of Sciences in 1999, [ 4 ] and a Fellow of the American Chemical Society in 2009. [ 2 ] [ 4 ]
Snieckus served as an editor for a variety of academic journals, was the president of the International Society of Heterocyclic Chemistry in 1985, and chaired the American Chemical Society's Organic Division in 1989–90. [ 4 ] He co-organized an academic conference series, Balticum Organicum Syntheticum, a chemistry conference held in the Baltic States . [ 2 ] [ 7 ]
Snieckus' research interests in organic synthesis focused on metalation and particularly lithiation . He is best known for his work on the directed ortho metalation family of reactions. [ 2 ] [ 3 ] [ 8 ] His work has had practical applications in both academic and industrial settings, [ 4 ] particularly in the industrial-scale synthesis of pharmaceuticals and in an agricultural antifungal . [ 5 ] [ 6 ]
|
https://en.wikipedia.org/wiki/Victor_Snieckus
|
Victoria Herrmann is an American polar geographer and climate change communicator . She is the managing director of The Arctic Institute , [ 1 ] a National Geographic Explorer, [ 2 ] and Assistant Research Professor at Georgetown University ’s Walsh School of Foreign Service , [ 3 ] where her research focuses on Arctic cooperation and politics and climate change adaptation in the US and US Territories.
Herrmann is also an American Association for the Advancement of Science (AAAS) IF/THEN Ambassador and works to empower girls and women in STEM . [ 4 ] She has been named on Forbes 30 Under 30 list, [ 5 ] the National Trust for Historic Preservation ’s 40 under 40 list, [ 6 ] a North American Young Leader by Friends of Europe, [ 7 ] one of 100 Most Influential People in Climate Policy worldwide by Apolitical, [ 8 ] and as part of the inaugural "CAFE 100 – extraordinary change-makers who are taking action to address some of the most pressing problems in America and around the world" by former US Attorney Preet Bharara . [ 9 ]
Born in Paramus, New Jersey , Herrmann took an early interest in environmental issues . [ 10 ] She was raised Jewish and has cited her grandparents’ experience as Holocaust survivors as the inspiration of her research and advocacy on the impacts of climate change on disenfranchised communities. [ 11 ] [ 12 ] [ 13 ] [ 14 ] She attended Paramus High School . [ 15 ]
In 2012 she completed a B.A. in International Relations and Art History at Lehigh University and was subsequently awarded a one-year Junior Fellowship at the Carnegie Endowment for International Peace in Washington, DC, where she worked on sustainable transport and climate policy in cities. [ 16 ] Herrmann moved to Canada in 2013 as a Fulbright grantee, completing an M.A. in International Affairs at Carleton University ’s Norman Paterson School of International Affairs . [ 17 ] In 2014 she was awarded a Gates Cambridge Scholarship for doctoral studies at the Scott Polar Research Institute . In 2017 Herrmann was awarded the Bill Gates Sr. Award for a commitment to improving the lives of others, [ 18 ] and in 2019 received her PhD from the University of Cambridge . In the last year of her PhD, Herrmann spent three months at the National Academies of Sciences, Engineering, and Medicine as a fellow in The Christine Mirzayan Science and Technology Policy Fellowship program. [ 19 ]
Herrmann joined The Arctic Institute in 2015, and in 2016 became the organization's President and managing director. [ 1 ] She directs strategic planning to achieve its mission to inform policy for a just, sustainable, and secure Arctic . Herrmann oversees the implementation of global research partnerships and manages a team across North America and Europe. Under Herrmann's tenure, The Arctic Institute has consistently ranked as a top-75 think tank by the University of Pennsylvania ’s Think Tanks and Civil Societies Program [ 20 ] and was shortlisted by Prospect Magazine as the best US Energy and Environment Think Tank . [ 21 ]
She is a recognized expert in Arctic policy, and has testified before the U.S. House of Representatives Homeland Security Committee [ 22 ] and has briefed the U.S. House of Representatives Foreign Affairs Committee [ 23 ] and the U.S. Senate Committee on Energy and Natural Resources on Arctic security and climate change. [ 24 ] In 2017-2018 she served as the Alaska Review Editor for the fourth National Climate Assessment [ 25 ] and currently serves as one of two US Delegates to the Social and Human Working Group of the International Arctic Science Committee . [ 26 ] Herrmann has sat on the Board of Directors of the Arctic Research Consortium of the U.S. since 2019 [ 27 ] and currently serves as a co-chair of the Arctic Youth Network Board of Directors. [ 28 ]
Herrmann's research focuses on climate-induced migration , displacement , and relocation in the Arctic, South Pacific, and United States. In 2016–2017, she served as the lead researcher for America's Eroding Edges, a National Geographic-funded research project. [ 29 ] [ 30 ] She traveled across the country interviewing 350 local leaders to identify what's needed most to safeguard coastal communities against the unavoidable impacts of climate change . [ 31 ] In partnership with the National Trust for Historic Preservation and with support from a JMK Innovation Prize, a follow-up project to Eroding Edges is bringing technical assistance directly to small and medium-sized towns that are geographically remote and socioeconomically vulnerable. [ 32 ] Her current National Geographic-funded research project, Culture On The Move: Climate Change, Displacement, and Relocation in Fiji , investigates the consequences of climate-induced relocation on cultural heritage . [ 33 ]
She was the inaugural Principal Investigator of the Research Coordination Network Arctic Migration in Harmony: An Interdisciplinary Network on Littoral Species, Settlements, and Cultures on the Move , funded by the National Science Foundation . [ 34 ] [ 35 ] Herrmann developed the 700+ member international network to facilitate open communication, foster cross-disciplinary exchange, and build new collaborative teams of scientists, stakeholders, and practitioners. The network investigates the ways in which the drivers and consequences of Arctic coastal migrations intersect and interact with one another, and identifies the implications for society.
Herrmann works both as a science communicator for public audiences and as an academic researcher studying climate change communications . She has published more than 20 peer-reviewed journal articles and academic book chapters. [ 36 ] Her research focuses on how images used in mass media construct values, identities, and ideas of power about climate change displacement, vulnerable communities, and Arctic policy. Herrmann has argued that climate change scholarship can and should inform concrete action, and that action can in turn enrich scholarship. In discussing her research at universities, she has encouraged other researchers to find their public voice and weigh the importance of storytelling for encouraging climate change action. [ 37 ] [ 38 ] [ 39 ]
Herrmann has given over 50 public talks, including keynote addresses at the National Trust for Historic Preservation ’s PastForward, [ 40 ] the Smithsonian Institution ’s Stemming the Tide: Global Strategies for Sustaining Culture Through Climate Change, [ 41 ] and the Hugh O’Brian Youth Leadership Foundation World Leadership Congress. [ 42 ] Herrmann advocates that “climate change is a story about losing the things that make us who we are”, and that "everyone has a part to play in climate solutions." [ 43 ]
As a National Geographic Explorer, Herrmann has given several public talks about climate change policy, storytelling, and community action. Her talks from National Geographic Society's stage include a Choose Your Own Adventure-inspired presentation for CreativeMornings [ 44 ] and a keynote panel at the Explorers Festival, where she was featured in conversation with Andrew Revkin , Emma Marris , Leland Melvin , and Ian Stewart to discuss a planet in peril. [ 45 ] She has also presented for traveling National Geographic events like National Geographic On Campus. [ 46 ] Herrmann is passionate about youth empowerment , and has worked closely with National Geographic Education to increase climate awareness and opportunities for local action. She helped produce and was featured in the online course Teaching Global Climate Change in Your Classroom , [ 47 ] presented climate stories across America for the Explorer Classroom program, [ 48 ] and facilitated and mentored young storytellers at National Geographic Photo Camp for youth in Louisiana . [ 49 ] [ 50 ] In 2021, Herrmann was a featured Explorer in ABC Owned Television Stations' Our America: Climate of Hope in partnership with National Geographic Partners . [ 51 ]
She frequently writes opinion pieces on climate change and Arctic policy for The Guardian , [ 52 ] Scientific American , [ 53 ] and CNN . [ 54 ] [ 55 ] Herrmann also appears often as an expert in the news, including NPR's Science Friday , [ 56 ] On Point , [ 57 ] All Things Considered , [ 58 ] and Weekend Edition ; [ 59 ] ABC News; [ 60 ] [ 61 ] and the BBC , [ 62 ] among others. In 2019 Herrmann was named an American Association for the Advancement of Science (AAAS) IF/THEN Ambassador, [ 4 ] and is an advocate for women's visibility in climate change research [ 63 ] [ 64 ] and girls engaging in STEM . [ 65 ] Herrmann has been featured as a role model for girls in STEM by the National Children's Museum [ 66 ] [ 67 ] [ 68 ] and the Ad Council ’s She Can STEM campaign. [ 10 ]
|
https://en.wikipedia.org/wiki/Victoria_Herrmann
|
The Victoria Lines , originally known as the North West Front , are a line of fortifications that spans 12 kilometres along the width of Malta , dividing the north of the island from the more heavily populated south. [ 1 ]
The Victoria Lines run along a natural geographical barrier known as the Great Fault , from Madliena in the east, through the limits of the town of Mosta in the centre of the island, to Binġemma and the limits of Rabat , on the west coast. The complex network of linear fortifications known collectively as the Victoria Lines, which cuts across the width of the island north of the old capital of Mdina , is a unique monument of military architecture .
When built by the British military in the late 19th century, the line was designed to present a physical barrier to invading forces landing in the north of Malta, intent on attacking the harbour installations, so vital for the maintenance of the British fleet, their source of power in the Mediterranean. Although never tested in battle, this system of defences, spanning some 12 km of land and combining different types of fortifications—forts, batteries, entrenchments, stop-walls, infantry lines, searchlight emplacements and howitzer positions—constituted a unique ensemble of varied military elements all brought together to enforce the strategy adopted by the British for the defence of Malta in the latter half of the 19th century, a singular solution which exploited the defensive advantages of geography and technology as no other work of fortifications does in the Maltese islands. [ 2 ]
The Victoria Lines owe their origin to a combination of international events and the military realities of the time. The opening of the Suez Canal in 1869 highlighted the importance of the Maltese islands. [ 2 ]
By 1872, the coastal works had progressed considerably, but the question of landward defences remained unsettled. Although the girdle of forts proposed by Colonel Jervois in 1866 would have considerably enhanced the defence of the harbour area, other factors had cropped up that rendered the scheme particularly difficult to implement, particularly the creation of suburbs. Another proposal, put forward by Col. Mann RE , was to take up a position well forward of the original.
The chosen position was the ridge of commanding ground north of the old City of Mdina, cutting transversely across the width of the island at a distance varying from 4 to 7 miles from Valletta . There, it was believed, a few detached forts could cut off all the westerly portion of the island containing good bays and facilities for landing. At the same time, the proposed line of forts retained the resources of the greater part of the country and the water on the side of the defenders; whereas the ground required for the building of the fortifications could be had far more cheaply than that in the vicinity of Valletta. Col. Mann estimated that the entire cost of the land and works of the new project would amount to £200,000, much less than would have been required to implement Jervois' scheme of detached forts.
This new defensive strategy was one which sought to seal off all the area around the Grand Harbour within an extended box-like perimeter, with the detached forts on the line of the Great Fault forming the north-west boundary, the cliffs to the south forming a natural, inaccessible barrier; while the north and east sides were to be defended by a line of coastal forts and batteries. In a way, the use of the Great Fault for defensive purposes was not an altogether original idea, for it had already been put forward by the Order of Saint John in the early decades of the 18th century, when they realized that they did not have the necessary manpower to defend the whole island. The Order had built a few infantry entrenchments at strategic places along the general line of the fault, namely, the Falca Lines and San Pawl tat-Tarġa, Naxxar . In fact, the use of parts of the natural escarpment for defensive purposes can be traced back even further, as illustrated by Nadur Tower at Bingemma (17th century), the Torri Falca (16th century) and the remains of a Bronze Age fortified citadel which possibly occupied the site of Fort Mosta . [ 3 ]
In 1873, the Defence Committee approved Adye’s defensive strategy and recommended the improvement of the already strong position between the Bingemma Hills and the heights above St. George’s Bay. Work on what was originally to be called the North-West Front began in 1875 with construction of a string of isolated forts and batteries, designed to stiffen the escarpment. Three forts were to be built along the position, at Bingemma, Madliena and Mosta, (designed to cover the western and eastern extremities and the centre of the front, respectively). The first to be built was Fort Bingemma. By 1878, work had still not commenced on the other two and the entrenched position at Dwerja; all of these were to be completed within the £200,000 budget. General Simmons recommended that the old Knights’ entrenchments located along the line of the escarpment at Tarġa and Naxxar were to be restored and incorporated into the defences. He also recommended that good communication roads should be formed in the rear of the lines and that those that already existed be improved. The fortifications of Mdina, the Island’s old capital, were to be considered as falling within the defensive system.
The forts on the defensive line were designed with a dual land/coastal defence role in mind, particularly the ones at the extremities but, due to the topography in the northern part of the island, there were areas of dead ground along the coast and inland approaches which could not be properly covered by the guns in the main forts. As a result, it was decided that new works should be built between Forts Mosta and Bingemma and emplacements for guns placed in them. It was also considered advisable to have new emplacements for guns built to the left of Fort Madalena and in the area between it and Fort Pembroke. The latter fort was built on the eastern littoral, below and to the rear of Fort Madalena, in order to control the gap caused by the accessible shoreline leading towards Valletta. Gun batteries were eventually proposed at Tarġa, Għargħur and San Giovanni. Plans for these works were drawn up but only the one at San Giovanni was actually built and armed, while the two at Għargħur were never constructed and that at Tarġa , although actually built, was never armed.
By 1888, the line of the cliffs formed by the great geological fault and the works which had been constructed along its length from Fort Bingemma on the left to Fort Madalena on the right constituted, in the words of Nicholson and Goodenough, "a military position of great strength". The main defects inherent in the defensive position were the extremities where the high ground descended towards the shore, leaving wide gaps through which enemy forces could by-pass the whole position. Particularly weak in this respect was the western extremity. There, a considerable interval existed between Fort Bingemma and the sea. Military manoeuvres held in the area revealed that it was possible for troops to land in Fomm ir-Riħ Bay and gain the rear of the fortified line undetected from the existing works. To counter this threat, recommendations were made for the construction of two epaulements for a movable armament of quick-firing or field guns, the construction of blockhouses, the improvement of the wall which closed the head of the deep valley to the south of Fort Bingemma and the strengthening of the line of cliffs by scarping in places. It was also suggested that the existing farmhouses in the area be made defensible.
There were even suggestions for the reconstruction and re-utilization of the old Hospitaller lines at ta' Falca and Naxxar , but only the latter was put to use, mainly because these commanded the approaches to the village of Naxxar, described as a position of great importance, in the event of a landing in St. Paul's Bay.
A serious shortcoming of the North West Front defences was the lack of barrack accommodation for the troops who were required to man and defend the works. The lines extended six miles and the accommodation provided in the forts was rather scanty. Consequently, it was considered necessary to build new barracks capable of accommodating a regiment (PRO MPH 234) and later a full battalion of infantry, and a new site was chosen to the rear of the Dwerja Lines, at Mtarfa. Although initially designed as a series of detached strong-points, the fortifications along the North West Front were eventually linked together by a continuous infantry line and the whole complex, by then nearing completion, was christened the Victoria Lines in order to commemorate the Diamond Jubilee of Queen Victoria in 1897. The long stretches of infantry lines linking the various strong-points—consisting in most places of a simple masonry parapet—were completed on 6 November 1899. [ 4 ]
The line of the intervening stretches followed the configuration of the crest of the ridge, along the contours of the escarpment . The nature of the wall varied greatly along its length but basically consisted of a sandwich-type construction with an outer and inner revetment, bonded at regular intervals and filled in with terreplein . The average height of the parapet was about five feet (1.5 metres). The walls were frequently topped by loopholes, of which only a very few sections have survived. In places, the debris from scarping was dumped in front of the wall to help create a glacis and ditch. In places, the rocky ground immediately behind the parapet was carved out to provide a walkway or patrol path along the length of the line. A number of valleys interrupted the line of the natural fault and, at such places, the continuation of the defensive perimeter was only permitted through the construction of shallow, defensible masonry bridges, as can still be seen today at Wied il-Faħam near Fort Madalena, Wied Anglu and Bingemma Gap. Other bridges, now demolished, existed at Mosta Ravine and Wied Filip.
During the last phase of their development, the Victoria Lines were strengthened by a number of batteries and additional fortifications. An infantry redoubt was built at the western extremity of the front at Fomm ir-Riħ and equipped with emplacements for Maxim machine guns . In 1897 a High Angle Battery was built well to the rear of the defensive lines at Għargħur and another seven howitzer batteries, each consisting of four emplacements for field guns protected by earthen traverses, were built close to the rear of the defensive line. Searchlight emplacements were built at il-Kunċizzjoni and Wied il-Faħam.
Military training exercises staged in May 1900 revealed that the Victoria Lines were of dubious defensive value. With the exception of the coastal forts, they were abandoned altogether by 1907. During World War Two , a joint German-Italian invasion seemed likely, so the lines were rehabilitated and new guard posts were built along them as a second line of defence behind the coastal defences. Again the lines went untested. [ 2 ] Fort Mosta is still in use as an ammunition depot, while Fort Madalena is still used by the Communications Information Systems Company of the AFM.
In 1998 the Government of Malta submitted the Victoria Lines to UNESCO for consideration as a World Heritage Site . [ 5 ]
Large parts of the fortification walls have collapsed, although some parts in the countryside remain intact and in general the Victoria Lines have fallen into obscurity. The Maltese Tourism Authority is proposing that by the end of 2019 two trails along the Lines will become Malta’s inaugural national walkway. [ 2 ]
|
https://en.wikipedia.org/wiki/Victoria_Lines
|
Victoria F. Samanidou is a Greek analytical chemist . She is a professor at Aristotle University of Thessaloniki in Thessaloniki , Greece.
|
https://en.wikipedia.org/wiki/Victoria_Samanidou
|
Vidarabine phosphate is an adenosine monophosphate nucleotide in which the ribose is replaced by an arabinose moiety. It has antiviral and possibly antineoplastic properties. [ 1 ]
|
https://en.wikipedia.org/wiki/Vidarabine_phosphate
|
Video is an electronic medium for the recording, copying , playback, broadcasting , and display of moving visual media . [ 1 ] Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays of several types.
Video systems vary in display resolution , aspect ratio , refresh rate , color capabilities, and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcasts , magnetic tape , optical discs , computer files , and network streaming .
The word video comes from the Latin videre , meaning "to see"; video is its first-person singular form, "I see". As a noun, video denotes "that which is displayed on a (television) screen". [ 2 ]
Video evolved from facsimile systems developed in the mid-19th century. Early mechanical video scanners, such as the Nipkow disk , were patented as early as 1884; however, it took several decades, long after the advent of film , before practical video systems could be developed. Film records a sequence of miniature photographic images that are visible to the eye when the film is physically examined. Video, by contrast, encodes images electronically, turning the images into analog or digital electronic signals for transmission or recording. [ 3 ]
Video technology was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) television systems. Video was originally exclusively live technology. Live video cameras used an electron beam, which would scan a photoconductive plate with the desired image and produce a voltage signal proportional to the brightness in each part of the image. The signal could then be sent to televisions, where another beam would receive and display the image. [ 4 ] Charles Ginsburg led an Ampex research team to develop one of the first practical video tape recorders (VTR). In 1951, the first VTR captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape .
Video recorders were sold for $50,000 in 1956, and videotapes cost US$300 per one-hour reel. [ 5 ] However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market . [ 6 ]
Digital video is capable of higher quality and, eventually, a much lower cost than earlier analog technology. After the commercial introduction of the DVD in 1997 and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit, and transmit digital video, further reducing the cost of video production and allowing programmers and broadcasters to move to tapeless production . The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and color gamuts , along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth , has caused digital video technology to converge with film technology. Since 2013, the use of digital cameras in Hollywood has surpassed the use of film cameras. [ 7 ]
Frame rate , the number of still pictures per unit of time of video, ranges from six or eight frames per second ( frame/s ) for old mechanical cameras to 120 or more for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frame/s, while NTSC standards (United States, Canada, Japan, etc.) specify 29.97 frame/s. [ 8 ] Film is shot at a slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second. [ 9 ]
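As an aside, the NTSC rate of 29.97 frame/s is exactly 30000/1001 frames per second, which matters when converting between running time and frame counts. A minimal Python sketch of that arithmetic (the names and the one-hour example are illustrative, not part of any standard):

from fractions import Fraction

NTSC = Fraction(30000, 1001)   # approximately 29.97 frames per second
PAL = Fraction(25, 1)          # 25 frames per second
FILM = Fraction(24, 1)         # 24 frames per second

def frames_for_duration(rate, seconds):
    # Number of whole frames shown in the given running time.
    return int(rate * seconds)

for name, rate in [("NTSC", NTSC), ("PAL", PAL), ("film", FILM)]:
    print(name, frames_for_duration(rate, 3600))
# one hour of material: NTSC 107892 frames, PAL 90000, film 86400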
Video can be interlaced or progressive . In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second . Interlacing retains detail while requiring lower bandwidth compared to progressive scanning. [ 10 ] [ 11 ]
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields : an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display. [ 10 ] [ 11 ]
NTSC, PAL, and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50 , where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second. [ 11 ] [ 12 ]
When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple line doubling —artifacts, such as flickering or "comb" effects in moving parts of the image, appear unless special signal processing eliminates them. [ 10 ] [ 13 ] A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an LCD television , digital video projector , or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material. [ 11 ] [ 12 ] [ 13 ]
Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular , and this can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio ) is 1.375:1. [ 14 ] [ 15 ]
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display. [ 14 ] [ 15 ]
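The relationship between the display aspect ratio (DAR), the stored raster, and the pixel aspect ratio (PAR) is simple arithmetic: PAR = DAR divided by the raster's width-to-height ratio. A short Python sketch for the 720 by 480 raster mentioned above (a simplified calculation; real-world standards adjust the values slightly for the active picture area):

from fractions import Fraction

def pixel_aspect_ratio(width, height, display_aspect):
    # PAR = DAR / SAR, where SAR is the storage aspect ratio width:height.
    return display_aspect / Fraction(width, height)

print(pixel_aspect_ratio(720, 480, Fraction(4, 3)))   # 8/9: "thin" pixels
print(pixel_aspect_ratio(720, 480, Fraction(16, 9)))  # 32/27: "fat" pixels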
The popularity of viewing video on mobile phones has led to the growth of vertical video . Mary Meeker , a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers , highlighted the growth of vertical video viewing in her 2015 Internet Trends Report – growing from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat 's are watched in their entirety nine times more frequently than landscape video ads. [ 16 ]
The color model defines the video color representation and maps encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically, YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television, and YCbCr is used for digital video. [ 17 ] [ 18 ]
The number of distinct colors a pixel can represent depends on the color depth expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block, and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes. [ 12 ] [ 17 ] [ 18 ]
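The savings quoted above follow directly from counting samples per block. A minimal Python sketch of the arithmetic, assuming 8 bits per sample (the function is illustrative, using the conventional J:a:b reading of the subsampling notation):

def bits_per_pixel(j, a, b, bit_depth=8):
    # J:a:b notation: J luma samples per row of a 2-line-high block,
    # 'a' chroma samples on the first row and 'b' on the second row,
    # for each of the two chroma channels (Cb and Cr).
    luma = 2 * j
    chroma = 2 * (a + b)
    return (luma + chroma) * bit_depth / (2 * j)

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    print(scheme, bits_per_pixel(*scheme), "bits per pixel")
# 4:4:4 -> 24, 4:2:2 -> 16, 4:2:0 -> 12 bits per pixel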
Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-R recommendation BT.500 . One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video, followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying."
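PSNR itself is straightforward to compute from the mean squared error between a reference frame and the impaired frame. A minimal NumPy sketch (the test frames and the 8-bit peak value are example inputs):

import numpy as np

def psnr(reference, impaired, max_value=255.0):
    # Peak signal-to-noise ratio in decibels between two frames of the same shape.
    mse = np.mean((reference.astype(np.float64) - impaired.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

ref = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 1), "dB")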
Uncompressed video delivers maximum quality, but at a very high data rate . A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy . Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression . Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression , including motion compensation and other techniques. The most common modern compression standards are MPEG-2 , used for DVD , Blu-ray, and satellite television , and MPEG-4 , used for AVCHD , mobile phones (3GP), and the Internet. [ 19 ] [ 20 ]
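The idea behind interframe compression can be illustrated with a toy example: instead of coding each frame in full, code only its difference from the previous frame, which is mostly zeros for slowly changing scenes and therefore compresses well. A simplified Python sketch (real codecs use motion compensation and transform coding rather than plain differencing with a generic compressor):

import numpy as np
import zlib

rng = np.random.default_rng(0)
background = rng.integers(0, 256, (480, 720), dtype=np.uint8)  # detailed static background

prev_frame = background.copy()
next_frame = background.copy()
next_frame[200:220, 300:340] = 255        # a small object appears in the new frame

intra = zlib.compress(next_frame.tobytes())                            # frame coded on its own
residual = next_frame.astype(np.int16) - prev_frame.astype(np.int16)   # mostly zeros
inter = zlib.compress(residual.tobytes())

print(len(intra), len(inter))   # the inter-coded residual is dramatically smaller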
Stereoscopic video for 3D film and other applications can be displayed using several different methods: [ 21 ] [ 22 ]
Different layers of video transmission and storage each provide their own set of formats to choose from.
For transmission, there is a physical connector and signal protocol (see List of video connectors ). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution , and color space .
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format , of which a number are available.
Analog video is a video signal represented by one or more analog signals . Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as is the case among others with NTSC , PAL , and SECAM , it is called composite video . Analog video may be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.
Analog video is used in both consumer and professional television production applications.
Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort .
Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, or coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards.
Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream , SMPTE 2022 and SMPTE 2110 .
Digital television broadcasts use the MPEG-2 and other video coding formats and include:
Analog television broadcast standards include:
An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region ; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.
Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available.
Early television was almost exclusively a live medium, with some programs recorded to film for historical purposes using Kinescope . The analog video tape recorder was commercially introduced in 1951. The following list is in rough chronological order. All formats listed were sold to and used by broadcasters, video producers, or consumers; or were important historically. [ 23 ] [ 24 ]
Digital video tape recorders offered improved quality compared to analog recorders. [ 24 ] [ 26 ]
Optical storage media offered an alternative, especially in consumer applications, to bulky tape formats. [ 23 ] [ 27 ]
A video codec is software or hardware that compresses and decompresses digital video . In the context of video compression, codec is a portmanteau of encoder and decoder , while a device that only compresses is typically called an encoder , and one that only decompresses is a decoder . The compressed data format usually conforms to a standard video coding format . The compression is typically lossy , meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video. [ 28 ]
|
https://en.wikipedia.org/wiki/Video
|
Video Share is an IP Multimedia System (IMS) enabled service for mobile networks that allows users engaged in a circuit switch voice call to add a unidirectional video streaming session over the packet network during the voice call. Any of the parties on the voice call can initiate a video streaming session. There can be multiple video streaming sessions during a voice call, and each of these streaming sessions can be initiated by any of the parties on the voice call. The video source can either be the camera on the phone or a pre-recorded video clip.
Video share is initiated from within a voice call. After a voice call is established, either party (calling or called) can start a Video Share (VS) session. The sending User is then able to stream one-way live or recorded video. The default behavior is that the receiving handset will automatically go to speakerphone mode when video is received, unless the headset is in place. The sender will be able to see what is being streamed on their handset, along with the receiving User. In this scenario, the sender can “narrate” over the CS audio connection while both parties view the video. Both users have the ability to initiate a video share session, and either the sender or recipient in a video share session can terminate the session at any time. As part of the VS invitation, the recipient can choose to reject the streamed video. It is intended that both sender and receiver will receive feedback when the other party terminates a session or the link drops due to lack of coverage.
The Video Share service is defined by the GSM Association ( GSMA ). It is often referred to as a Combinational Service, meaning that the service combines a circuit switch voice call with a packet switch multimedia session. This concept is described in the 3rd Generation Partnership Project (3GPP) specification documents 3GPP TS 22.279, 3GPP TS 23.279 and 3GPP TS 24.279. The Video Share service requires a 3GPP compliant IMS core system .
GSM Association has split the Video Share service definition [ 1 ] into 2 distinct phases. The first phase (also called Phase 1) involves sharing a simple peer-to-peer, one-way video stream in conjunction with, but not synchronized to a circuit switch voice call. The second phase (also called Phase 2) introduces the Video Share Application Server in the solution and supports more complex features and capabilities, such as point-to-multipoint video share calls, video streaming to a web portal, and integration of video share with instant messaging.
In the industry, Video Share is also referred to by other names such as See What I See and Rich Voice Call .
Video Share is supported only in UMTS and EDGE (with DTM) networks. It is not supported in a GPRS or a CDMA network. The Video Share Client will drop a VS session when the handset transitions from UMTS to GSM during the session. The CS voice call will remain connected.
AT&T (formerly Cingular) is one of the mobile operators that have deployed the Video Share service nationwide.
Peer-to-peer video sharing was first introduced by Nokia phones in 2004. This was a proprietary solution on top of a SIP or IMS infrastructure. Some European operators offered commercial services based on these phones as early as 2005. Similar services appeared under the names See What I See, Rich Voice Call, Push-to-Video (P2video or PTV), etc.
The GSMA Video Share service [ 1 ] was originally defined, implemented and tested during the Session Initiation Protocol (SIP) trials conducted by the GSM Association in 2005/2006. During the SIP trials, the Video Share service was used to demonstrate IMS interworking over SIP. [ 2 ] Video Share was also tested on the IPX [ 3 ] to prove that the service might become universally available in the future.
Subsequently, GSMA decided to create a separate project for Video Share. Phase 1 of the Video Share project built on and leveraged the results from the SIP trials. Service definition for the first phase of the Video Share project was completed in September/October 2006. Mobile operators worldwide, such as AT&T, have deployed the Video Share service [ 4 ] based on the Phase 1 service definition. An interoperability technical reference specification for Video Share [ 5 ] is also available from GSMA.
Phase 2 of the GSMA Video Share project was kicked off in May/June 2007 and is currently in progress.
Video Share is sometimes confused with traditional two-way Video Call service. Video Call involves simultaneous two-way Video and Audio transmission between the 2 parties (from start to finish), whereas Video Share involves adding and removing one or more one-way Video sessions to an existing voice call between the 2 parties. There are other subtle differences between the two services as far as the user experience is concerned:
Extensions to Video Share include Video Clip Sharing, where a video clip recorded on the phone (or resident in the network) can be shared between two parties – something not delivered in a typical Video Call implementation.
The Phase 2 Video Share solution consists of a Client Application running on mobile handsets and an Application Server deployed in the mobile network. The Phase 1 Video Share architecture does not include an Application Server, i.e. media is transferred directly between the terminals. The Video Share service uses the standard IMS Core infrastructure to transmit signaling and media traffic. IP Packet Exchange (IPX) proxies may be part of this infrastructure to allow interconnection between operators and to provide a collection point for session accounting records used for inter-operator traffic charging.
The Video Share (VS) Client is a software application running on the mobile handset. Typically, the Video Share Client is implemented as a native application running on mobile operating systems such as Windows Mobile, Symbian, Linux, and proprietary RTOS. VS-compliant handsets contain an ISIM/USIM properly provisioned with IMS public/private identities and access credentials. A user’s subscription is usually bound to their smart card (ISIM/USIM) such that the Video Share service is portable in the sense that the user may be able to send and receive video share on any capable handset.
The Video Share Client supports SIP and RTP/RTCP transmission. SIP is used for call control and signaling, and RTP/RTCP is used for video transmission. Functionality supported by a GSMA Video Share Client includes:
The Video Share Application Server is an IMS Application Server that interfaces with the S-CSCF network element in the IMS network through the 3GPP-defined ISC interface. The Application Server supports the SIP Back-to-Back User Agent (B2BUA) call control architecture that enables service policy control and enforcement capabilities of the video share session. The Video Share Application Server typically runs on a carrier grade fault tolerant hardware platform.
Functionality supported by the Video Share Application Server includes:
The basic steps involved in setting up and tearing down a Video Share session are as follows:
The Video Share session begins with a Circuit Switch (CS) voice call between User A and User B. The next step is the Capability Exchange, in which the other handset is queried to determine if the recipient is capable of supporting a Video Share session. This is performed with the SIP OPTIONS method. Both handsets can perform this capability exchange. The Video Share session is initiated by sending a SIP INVITE message to the called party.
After the Video Share session has been set up, transmission of the actual video can begin. Video is sent between Video Share clients using RTP (Real-Time Transport Protocol), which is widely used in internet and mobile communities for video streaming. The video transport is augmented by a control protocol (RTCP) to allow monitoring of the data delivery using RTCP RR (Receiver Report) and RTCP SR (Sender Report) packets. When either party decides to stop the Video Share session, the session will be torn down (using RTCP BYE) and the SIP session stopped (using SIP BYE). After these steps the Circuit Switch voice call session still exists.
In the case of a Web Portal-based Video Share session, the video is streamed to the Portal instead of User B, and accessed using a PC with a web browser.
There are multiple options for deploying the Video Share service.
|
https://en.wikipedia.org/wiki/Video_Share
|
The NewTek Video Toaster is a combination of hardware and software for the editing and production of NTSC standard-definition video. The plug-in expansion card initially worked with the Amiga 2000 computer and provides a number of BNC connectors on the exposed rear edge that provide connectivity to common analog video sources like VHS VCRs. The related software tools support video switching , luma keying , character generation , animation , and image manipulation . [ 1 ]
For a few thousand U.S. dollars, the hardware and software provided a video editing suite in the early 1990s that rivaled the output of contemporary professional systems costing ten times as much. It allowed small studios to produce high-quality material and resulted in a cottage industry for video production not unlike the success of the Macintosh in the desktop publishing ( DTP ) market only a few years earlier. The Video Toaster won the Emmy Award for Technical Achievement in 1993. [ 2 ] Other parts of the original software package were spun off as stand-alone products, notably LightWave 3D , and achieved success on their own.
As the Amiga platform lost market share and Commodore International went bankrupt in 1994 as a result of declining sales, the Video Toaster was moved to the Microsoft Windows platform where it is still available. The company also produced what is essentially a portable pre-packaged version of the Video Toaster along with all the computer hardware needed, as the TriCaster . These became all-digital units in 2014, ending production of the analog line.
The Video Toaster was designed by NewTek founder Tim Jenison in Topeka , Kansas . Engineer Brad Carvey built the first wire wrap prototype, and Steve Kell wrote the software for the prototype. Many other people worked on the Toaster as it developed. [ 3 ]
The Toaster was announced at the World of Commodore expo in 1987 [ 4 ] and released as a commercial product in December 1990 [ 5 ] for the Commodore Amiga 2000 computer system, taking advantage of the video-friendly aspects of that system's hardware to deliver the product at an unusually low cost of $2,399. [ 5 ] The Amiga was well adapted to this application in that its system clock at 7.158 MHz was precisely double that of the NTSC color carrier frequency , 3.579 MHz , allowing for simple synchronization of the video signal. [ citation needed ] The hardware component is a full-sized card that is installed into the Amiga 2000 's unique single video expansion slot rather than the standard bus slots, and therefore cannot be used with the A500 or A1000 models. The card has several BNC connectors in the rear, which accepts four video input sources and provided two outputs (preview and program). This initial generation system is essentially a real-time four-channel video switcher .
One feature of the Video Toaster is the inclusion of LightWave 3D , a 3D modeling, rendering, and animation program. This program became so popular in its own right that in 1994 it was made available as standalone product separate from the Toaster systems. [ 6 ]
Aside from simple fades, dissolves , and cuts, the Video Toaster has a large variety of character generation, overlays and complex animated switching effects. These effects are in large part performed with the help of the native Amiga graphics chipset , which is synchronized to the NTSC video signals. As a result, while the Toaster was rendering a switching animation, the computer desktop display is not visible. While these effects are unique and inventive, they cannot be modified. Soon Toaster effects were seen everywhere, advertising the device as the brand of switcher those particular production companies were using.
The Toaster hardware requires very stable input signals, and therefore is often used along with a separate video sync time-base corrector to stabilize the video sources. Third-party low-cost time-base correctors (TBCs) specifically designed to work with the Toaster quickly came to market, most of which were designed as standard ISA bus cards, taking advantage of the typically unused Bridgeboard slots. The cards do not use the Bridgeboard to communicate, but simply as a convenient power supply and physical location.
As with all video switchers that use a frame buffer to create DVEs (digital video effects), the video path through the Toaster hardware introduced delays in the signals when the signal was in "digital" mode. Depending on the video setup of the user, this delay could be quite noticeable when viewed along with the corresponding audio, so some users installed audio delay circuits to match the Toaster's video-delay lag, as is common practice in video-switching studios.
A user still needs at least three video tape recorders (VTR) and a controller to perform A/B roll linear video editing (LE), as the Toaster serves merely as a switcher, which can be triggered through general-purpose input/output (GPIO) to switch on cue in such a configuration, as the Toaster has no edit-controlling capabilities. The frame delays passing through the Toaster and other low-cost video switchers make precise editing a frustrating endeavor. Internal cards and software from other manufacturers are available to control VTRs; the most common systems go through the serial port to provide single-frame control of a VTR as a capture device for LightWave animations. A Non-linear editing system (NLE) product was added later, with the invention of the Video Toaster Flyer.
Although initially offered as just an add-on to an Amiga, the Video Toaster was soon available as a complete turn-key system that included the Toaster, Amiga, and sync generator . [ citation needed ] These Toaster systems became very popular, primarily because at a cost of around US$5,000, they could do much of what a $100,000 fully professional video switcher (such as a Grass Valley switcher) could do at that time. [ citation needed ] The Toaster was also the first such video device designed around a general-purpose personal computer that is capable of delivering broadcast quality NTSC signals. [ citation needed ]
As such, during the early 1990s the Toaster was widely used by consumer Amiga owners, desktop video enthusiasts, and local television studios, and was even used during The Tonight Show regularly to produce special effects for comedy skits . It was often easy to detect a studio that used the Toaster by the unique and recognizable special switching effects. [ 7 ] The NBC television network also used the Video Toaster with LightWave for its promotional campaigns, beginning with the 1990-1991 broadcast season ("NBC: The Place To Be!"). [ 8 ] [ 9 ] All of the external submarine shots in the TV series seaQuest DSV were created using LightWave 3D , as were the outer-space scenes in the TV series Babylon 5 (although Amiga hardware was only used for the first three seasons). Because of the heavy use of dark blues and greens (for which the NTSC television standard is weak), the external submarine shots in seaQuest DSV could not have made it to air without the use of the ASDG Abekas driver , written specifically to solve this problem by Aaron Avery at ASDG (later Elastic Reality , Inc.). This was due to "ASDG's exclusive color encoding technology which increases the apparent color bandwidth of video". [ 10 ]
An updated version called Video Toaster 4000 was later released, using the Amiga 4000 's video slot. The 4000 was co-developed by actor Wil Wheaton , then famous for Star Trek: The Next Generation , who worked on product testing and quality control. [ 11 ] [ 12 ] He later used his public profile to serve as a technology evangelist for the product. [ 5 ] Besides Wheaton, Penn Jillette (of Penn and Teller fame) and skateboarder Tony Hawk also served as evangelists for the 4000. Hawk was given a Video Toaster 4000 by NewTek upon learning that he was an Amiga user, in exchange for appearing in a promotional video for the product. [ 13 ] Tony Hawk later used the Toaster for editing a promotional video for the TurboDuo game Lords of Thunder in 1993. [ 14 ] [ 15 ] The Amiga Video Toaster 4000 source code was released in 2004 by NewTek & DiscreetFX.
For the second generation NewTek introduced the Video Toaster Flyer . The Flyer is a much more capable non-linear editing system . In addition to just processing live video signals, the Flyer makes use of hard drives to store video clips as well as audio and allow complex scripted playback. The Flyer is capable of simultaneous dual-channel playback, which allows the Toaster's video switcher to perform transitions and other effects on video clips without the need for rendering .
The hardware component is again a card designed for the Amiga's Zorro II expansion slot, and was primarily designed by Charles Steinkuehler. The Flyer portion of the Video Toaster/Flyer combination is a complete computer of its own, having its own microprocessor and embedded software , which was written by Marty Flickinger. Its hardware includes three embedded SCSI controllers. Two of these SCSI buses are used to store video data, and the third to store audio. The hard drives are thus connected to the Flyer directly and use a proprietary filesystem layout rather than being connected to the Amiga's buses, but they are still available as regular devices using the included DOS driver. The Flyer uses a proprietary Wavelet compression algorithm known as VTASC, which was well-regarded at the time for offering better visual quality than comparable motion-JPEG -based nonlinear editing systems.
One of the card's primary uses is for playing back LightWave 3D animations created in the Toaster.
In 1993, NewTek announced the Video Toaster Screamer , a parallel extension to the Toaster built by DeskStation Technology , with four motherboards , each with a MIPS R4400 CPU running at 150 MHz and 64 MB of RAM. The Screamer accelerated the rendering of animations developed using the Toaster's bundled Lightwave 3D software, and is supposedly 40 times as powerful as a Toaster 4000. Only a handful of test units were produced before NewTek abandoned the project and refocused on the Flyer. This cleared the way for DeskStation Technology to release their own cut-down version, the Raptor. [ 16 ]
Later generations of the product run on Windows NT PCs. In 2004, the source code for the Amiga version was publicly released and hosted on DiscreetFX's site Open Video Toaster. With the additions of packages such as DiscreetFX's Millennium and thousands of wipes and backgrounds added over the years, one can still find the Video Toaster systems in use today in fully professional systems. NewTek renamed the VideoToaster to "VideoToaster[2]", and later, "VT[3]" for the PC version and is now at version 5.3. Since VT[4] version 4.6, SDI switching is supported through an add-on called SX-SDI.
NewTek released a spin-off product, known as the TriCaster, a portable live-production, live-projection, live-streaming, and NLE system. The TriCaster packaged the VT system as a turnkey solution in a custom-designed portable PC case with video, audio and remote computer inputs and outputs on the front and back of the case. As of April 2008, four versions were in production: the basic TriCaster 2.0, TriCaster PRO 2.0, TriCaster STUDIO 2.0 and the TriCaster BROADCAST, the latter of which added SDI and AES-EBU connectivity plus a preview output capability. The TriCaster PRO FX, a model situated in line between the original TriCaster PRO and TriCaster STUDIO, was introduced in early 2008 and later discontinued; its feature set was added to the TriCaster PRO 2.0. The TriCaster STUDIO 2.0 and TriCaster BROADCAST use successively larger cases than the base-model TriCaster 2.0. The units within the product line above the base-model TriCaster 2.0 enable use of LiveSet 3D Live Virtual Set technology developed by NewTek, which is also found in NewTek's venerable VT[5] Integrated Production Suite, the modern-day successor to the original Video Toaster.
In late 2009, NewTek released its high-definition version of the TriCaster, called the TriCaster XD300, a three-input HD system. It is able to accept a variety of formats (NTSC, 720p , or 1080i ; and on multi-standard systems, PAL ) that can be mixed to downstream keys. The XD300 also features five M/E style virtual inputs, permitting up to three video sources in one source, accessible like any other input on the switcher.
At NAB Show 2010, NewTek announced its TCXD850, a rack-mountable eight-input switcher with 22 channels. It was released on July 15, 2010. [ 17 ]
By 2009, the Video Toaster started to receive less attention from NewTek in the run-up to the transition to HD systems. In December 2010, the discontinuation of VT[5] was announced, marking the end of the Video Toaster as a stand-alone product. TriCaster systems based on the VT platform were still made up until August 2012, when the TriCaster STUDIO was replaced by the TriCaster 40. This officially marked the end of the Video Toaster.
|
https://en.wikipedia.org/wiki/Video_Toaster
|
The Video Buffering Verifier (VBV) is a theoretical MPEG video buffer model, used to ensure that an encoded video stream can be correctly buffered, and played back at the decoder device.
By definition, the VBV shall not overflow nor underflow when its input is a compliant stream (except in the case of low_delay). It is therefore important when encoding such a stream that it comply with the VBV requirements.
One way to think of the VBV is in terms of a maximum bit rate and a maximum buffer size. The rate at which coded video data arrives at the buffer matters, but because the bit rate of the video is constantly changing, there is no single constant arrival figure. The larger question is how long the buffer can absorb data before it overflows: a larger buffer size simply means that the decoder will tolerate high bit rates for longer periods of time, but no buffer is infinite, so eventually even a large buffer will overflow.
There are two operational modes of VBV: Constant Bit Rate (CBR) and Variable Bit Rate (VBR). In CBR, the decoder's buffer is filled over time at a constant data rate. In VBR, the buffer is filled at a non-constant rate. In both cases, data is removed from the buffer in varying chunks, depending on the actual size of the coded frames.
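The behaviour described above can be simulated directly: bits flow into the buffer at the channel rate and each coded picture is removed in one chunk when it is decoded. A simplified Python sketch of a CBR-style check (the frame sizes, rates and initial fullness are hypothetical inputs; the real VBV rules in the MPEG specifications are more detailed):

def vbv_check(frame_sizes_bits, bitrate, frame_rate, buffer_size_bits, initial_fullness):
    # Simulate a constant-bit-rate VBV: bits arrive at a fixed rate and each
    # coded picture is removed in one chunk at its decode time.
    fill_per_frame = bitrate / frame_rate
    fullness = initial_fullness
    for size in frame_sizes_bits:
        fullness += fill_per_frame
        if fullness > buffer_size_bits:
            raise ValueError("VBV overflow")    # data arrived faster than it was consumed
        if size > fullness:
            raise ValueError("VBV underflow")   # frame not fully buffered at decode time
        fullness -= size
    return fullness

# 25 frame/s, 4 Mbit/s channel, 1.8 Mbit buffer, alternating large/small frames:
print(vbv_check([300_000, 120_000] * 10, 4_000_000, 25, 1_800_000, 1_600_000))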
In the H.264 and VC-1 standards, the VBV is replaced with generalized version called Hypothetical Reference Decoder (HRD).
|
https://en.wikipedia.org/wiki/Video_buffering_verifier
|
A video codec is software or hardware that compresses and decompresses digital video . In the context of video compression, codec is a portmanteau of encoder and decoder , while a device that only compresses is typically called an encoder , and one that only decompresses is a decoder .
The compressed data format usually conforms to a standard video coding format . The compression is typically lossy , meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.
There are complex relationships between the video quality , the amount of data used to represent the video (determined by the bit rate ), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay ( latency ).
Historically, video was stored as an analog signal on magnetic tape . Around the time when the compact disc entered the market as a digital-format replacement for analog audio, it became feasible to also store and convey video in digital form. Because of the large amount of storage and bandwidth needed to record and convey raw video, a method was needed to reduce the amount of data used to represent the raw video. Since then, engineers and mathematicians have developed a number of solutions for achieving this goal that involve compressing the digital video data.
In 1974, discrete cosine transform (DCT) compression was introduced by Nasir Ahmed , T. Natarajan and K. R. Rao . [ 1 ] [ 2 ] [ 3 ] During the late 1980s, a number of companies began experimenting with DCT lossy compression for video coding, leading to the development of the H.261 standard. [ 4 ] H.261 was the first practical video coding standard, [ 5 ] and was developed by a number of companies, including Hitachi , PictureTel , NTT , BT , and Toshiba , among others. [ 6 ] Since H.261, DCT compression has been adopted by all the major video coding standards that followed. [ 4 ]
The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS -quality video. It was succeeded in 1994 by MPEG-2 / H.262 , [ 5 ] which was developed by a number of companies, primarily Sony , Thomson and Mitsubishi Electric . [ 7 ] MPEG-2 became the standard video format for DVD and SD digital television . [ 5 ] In 1999, it was followed by MPEG-4 / H.263 , which was a major leap forward for video compression technology. [ 5 ] It was developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic . [ 8 ]
The most widely used video coding format, as of 2016, is H.264/MPEG-4 AVC . It was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics . [ 9 ] H.264 is the main video encoding standard for Blu-ray Discs , and is widely used by streaming internet services such as YouTube , Netflix , Vimeo , and iTunes Store , web software such as Adobe Flash Player and Microsoft Silverlight , and various HDTV broadcasts over terrestrial and satellite television.
AVC has been succeeded by HEVC (H.265), developed in 2013. It is heavily patented, with the majority of patents belonging to Samsung Electronics , GE , NTT and JVC Kenwood . [ 10 ] [ 11 ] The adoption of HEVC has been hampered by its complex licensing structure. HEVC is in turn succeeded by Versatile Video Coding (VVC).
There are also the open and free VP8 , VP9 and AV1 video coding formats, used by YouTube, all of which were developed with involvement from Google .
Video codecs are used in DVD players, Internet video , video on demand , digital cable , digital terrestrial television , videotelephony and a variety of other applications. In particular, they are widely used in applications that record or transmit video, which may not be feasible with the high data volumes and bandwidths of uncompressed video. For example, they are used in operating theaters to record surgical operations, in IP cameras in security systems, and in remotely operated underwater vehicles and unmanned aerial vehicles . Any video stream or file can be encoded using a wide variety of live video format options. Here are some of the H.264 encoder settings that need to be set when streaming to an HTML5 video player. [ 12 ]
Video codecs seek to represent a fundamentally analog data set in a digital format. Because of the design of analog video signals, which represent luminance (luma) and color information (chrominance, chroma) separately, a common first step in image compression in codec design is to represent and store the image in a YCbCr color space. The conversion to YCbCr provides two benefits: first, it improves compressibility by providing decorrelation of the color signals; and second, it separates the luma signal, which is perceptually much more important, from the chroma signal, which is less perceptually important and which can be represented at lower resolution using chroma subsampling to achieve more efficient data compression. It is common to express the ratio of information stored in these different channels as Y:Cb:Cr. Different codecs use different chroma subsampling ratios as appropriate to their compression needs. Video compression schemes for Web and DVD make use of a 4:2:0 color sampling pattern, and the DV standard uses 4:1:1 sampling ratios. Professional video codecs designed to function at much higher bitrates and to record a greater amount of color information for post-production manipulation sample in 4:2:2 and 4:4:4 ratios. Examples of these codecs include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's HDCAM-SR (4:4:4), Panasonic's HDD5 (4:2:2), and Apple 's ProRes HQ 422 (4:2:2). [ 13 ]
Video codecs can also operate in RGB space. These codecs tend not to sample the red, green, and blue channels in different ratios, since there is less perceptual motivation for doing so; at most, the blue channel could be undersampled.
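To make the YCbCr step concrete, the commonly used BT.601-style conversion for full-range 8-bit values can be written in a few lines. A simplified Python sketch (broadcast formats add studio-swing offsets and scaling, and results may need clamping to 0-255):

def rgb_to_ycbcr(r, g, b):
    # Approximate ITU-R BT.601 luma weights; full-range values, no studio-swing scaling.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)   # blue-difference chroma
    cr = 128 + 0.713 * (r - y)   # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # pure red: high Cr, low Cb
print(rgb_to_ycbcr(0, 0, 255))   # pure blue: high Cb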
Some amount of spatial and temporal downsampling may also be used to reduce the raw data rate before the basic encoding process. The most popular encoding transform is the 8x8 DCT. Codecs that make use of a wavelet transform are also entering the market, especially in camera workflows that involve dealing with RAW image formatting in motion sequences. This process involves representing the video image as a set of macroblocks . For more information about this critical facet of video codec design, see B-frames . [ 14 ]
The output of the transform is first quantized , then entropy encoding is applied to the quantized values. When a DCT has been used, the coefficients are typically scanned using a zig-zag scan order, and the entropy coding typically combines a number of consecutive zero-valued quantized coefficients with the value of the next non-zero quantized coefficient into a single symbol and also has special ways of indicating when all of the remaining quantized coefficient values are equal to zero. The entropy coding method typically uses variable-length coding tables . Some encoders compress the video in a multiple-step process called n-pass encoding (e.g. 2-pass), which performs a slower but potentially higher quality compression.
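A tiny Python sketch of the scan-and-run-length idea just described: the quantized 8x8 block is read in zig-zag order, runs of zeros preceding each non-zero coefficient are collapsed into (run, value) pairs, and an end-of-block marker is emitted once only zeros remain (the symbol format here is illustrative, not any particular standard's syntax):

def zigzag_order(n=8):
    # Generate (row, col) coordinates of an n x n block in zig-zag scan order.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_encode(block):
    # Collapse runs of zero coefficients into (run, value) symbols, then "EOB".
    symbols, run = [], 0
    scanned = [block[r][c] for r, c in zigzag_order(len(block))]
    for coeff in scanned:
        if coeff == 0:
            run += 1
        else:
            symbols.append((run, coeff))
            run = 0
    symbols.append("EOB")   # all remaining coefficients are zero
    return symbols

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[2][0] = 52, -3, 7   # a very sparse quantized block
print(run_length_encode(block))   # [(0, 52), (0, -3), (1, 7), 'EOB']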
The decoding process consists of performing, to the extent possible, an inversion of each stage of the encoding process. [ 15 ] The one stage that cannot be exactly inverted is the quantization stage. There, a best-effort approximation of inversion is performed. This part of the process is often called inverse quantization or dequantization , although quantization is an inherently non-invertible process.
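A minimal illustration of why quantization cannot be exactly inverted: dividing by a step size and rounding discards the remainder, and dequantization can only multiply back by the step size. A simplified uniform-quantizer sketch in Python (real codecs use per-frequency step sizes):

step = 10                       # quantizer step size
coefficients = [57, -23, 4, 0]  # example transform coefficients

quantized = [round(c / step) for c in coefficients]      # [6, -2, 0, 0]
dequantized = [q * step for q in quantized]              # [60, -20, 0, 0]

print(quantized, dequantized)   # the original values 57, -23 and 4 are not recoverable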
Video codec designs are usually standardized or eventually become standardized—i.e., specified precisely in a published document. However, only the decoding process need be standardized to enable interoperability. The encoding process is typically not specified at all in a standard, and implementers are free to design their encoder however they want, as long as the video can be decoded in the specified manner. For this reason, the quality of the video produced by decoding the results of different encoders that use the same video codec standard can vary dramatically from one encoder implementation to another.
A variety of video compression formats can be implemented on PCs and in consumer electronics equipment. It is therefore possible for multiple codecs to be available in the same product, reducing the need to choose a single dominant video compression format to achieve interoperability .
Standard video compression formats can be supported by multiple encoder and decoder implementations from multiple sources. For example, video encoded with a standard MPEG-4 Part 2 codec such as Xvid can be decoded using any other standard MPEG-4 Part 2 codec such as FFmpeg MPEG-4 or DivX Pro Codec , because they all use the same video format.
Codecs have their qualities and drawbacks. Comparisons are frequently published. The trade-off between compression power, speed, and fidelity (including artifacts ) is usually considered the most important figure of technical merit.
Online video material is encoded by a variety of codecs, and this has led to the availability of codec packs — a pre-assembled set of commonly used codecs combined with an installer available as a software package for PCs, such as K-Lite Codec Pack , Perian and Combined Community Codec Pack .
|
https://en.wikipedia.org/wiki/Video_codec
|
In the field of video compression , a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly around amount of data compression . These different algorithms for video frames are called picture types or frame types . The three major picture types used in the different video algorithms are I , P and B . [ 1 ] They are different in the following characteristics:
Three types of pictures (or frames) are used in video compression : I, P, and B frames.
An I‑frame ( intra-coded picture ) is a complete image, like a JPG or BMP image file.
A P‑frame (Predicted picture) holds only the changes in the image from a previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded. The encoder does not need to store the unchanging background pixels in the P‑frame, thus saving space. P‑frames are also known as delta‑frames .
A B‑frame (Bidirectional predicted picture) saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
P and B frames are also called inter frames . The order in which the I, P and B frames are arranged is called the group of pictures . Video frames contain presentation timestamp (PTS) and decoding timestamp (DTS) values to keep the frames in correct order for decoding and displaying.
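Because a B-frame references a later picture, the decoder must receive that later reference before the B-frame itself, which is why decode order (DTS) differs from display order (PTS). A schematic Python sketch of the reordering for a small group of pictures (the GOP pattern is just an example):

display_order = ["I1", "B2", "B3", "P4", "B5", "B6", "P7"]   # order the viewer sees

def to_decode_order(frames):
    # Each B-frame needs its following reference (I or P) decoded first,
    # so references are moved ahead of the B-frames that depend on them.
    decode, pending_b = [], []
    for frame in frames:
        if frame.startswith("B"):
            pending_b.append(frame)       # hold until the next reference arrives
        else:
            decode.append(frame)          # I or P reference frame
            decode.extend(pending_b)      # now the held B-frames can be decoded
            pending_b = []
    return decode + pending_b

print(to_decode_order(display_order))
# ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6']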
While the terms "frame" and "picture" are often used interchangeably, the term picture is a more general notion, as a picture can be either a frame or a field . A frame is a complete image, and a field is the set of odd-numbered or even-numbered scan lines composing a partial image. For example, an HD 1080 picture has 1080 lines (rows) of pixels. An odd field consists of pixel information for lines 1, 3, 5...1079. An even field has pixel information for lines 2, 4, 6...1080. When video is sent in interlaced-scan format, each frame is sent in two fields, the field of odd-numbered lines followed by the field of even-numbered lines.
A frame used as a reference for predicting other frames is called a reference frame.
Frames encoded without information from other frames are called I-frames. Frames that use prediction from a single preceding reference frame (or a single frame for prediction of each region) are called P-frames. B-frames use prediction from a (possibly weighted) average of two reference frames, one preceding and one succeeding.
In the H.264/MPEG-4 AVC standard, the granularity of prediction types is brought down to the "slice level." A slice is a spatially distinct region of a frame that is encoded separately from any other region in the same frame. I-slices, P-slices, and B-slices take the place of I, P, and B frames.
Typically, pictures (frames) are segmented into macroblocks , and individual prediction types can be selected on a macroblock basis rather than being the same for the entire picture, as follows:
Furthermore, in the H.264 video coding standard, the frame can be segmented into sequences of macroblocks called slices , and instead of using I, B and P-frame type selections, the encoder can choose the prediction style distinctly on each individual slice. Also in H.264 are found several additional types of frames/slices:
Multi‑frame motion estimation increases the quality of the video, while allowing the same compression ratio. SI and SP frames (defined for the Extended Profile) improve error correction . When such frames are used along with a smart decoder, it is possible to recover the broadcast streams of damaged DVDs.
Often, I‑frames are used for random access and are used as references for the decoding of other pictures. Intra refresh periods of a half-second are common on such applications as digital television broadcast and DVD storage. Longer refresh periods may be used in some environments. For example, in videoconferencing systems it is common to send I-frames very infrequently.
|
https://en.wikipedia.org/wiki/Video_compression_picture_types
|
Video copy detection is the process of detecting illegally copied videos by analyzing them and comparing them to original content.
The goal of this process is to protect a video creator's intellectual property.
Indyk et al. [ 1 ] produced a video copy detection theory based on the length of the film; however, it worked only for whole films without modifications. When applied to short clips of a video, Indyk et al.'s technique does not detect that the clip is a copy.
Later, [ when? ] Oostveen et al. introduced the concept of a fingerprint , or hash function , that creates a unique signature of the video based on its contents. This fingerprint is based on the length of the video and the brightness, as determined by splitting it into a grid. The fingerprint cannot be used to recreate the original video because it describes only certain features of its respective video.
B. Coskun et al. later presented two robust algorithms based on the discrete cosine transform .
Hampapur and Balle created an algorithm that produces a global description of a piece of video based on the video's motion, color, space, [ clarification needed ] and length.
Another approach is to examine the color levels of the image: Li et al. created an algorithm that examines the colors of a clip by creating a binary signature derived from the histogram of every frame. This algorithm, however, returns inconsistent results in cases in which a logo is added to the video, because the insertion of the logo's color elements adds false information that can confuse the system.
Watermarks are used to introduce an invisible signal into a video to ease the detection of illegal copies. This technique is widely used by photographers . Placing a watermark on a video such that it is easily seen by an audience allows the content creator to detect easily whether the image has been copied.
The limitation of watermarks is that if the original image is not watermarked, then it is not possible to know whether other images are copies.
In this technique, a unique signature is created for the video on the basis of the video's content. Various video copy detection algorithms exist that use features of the video's content to assign the video a unique videohash . The fingerprint can be compared with other videohashes in a database .
This type of algorithm has a significant problem: if various aspects of the videos' contents are similar, it is difficult for an algorithm to determine whether the video in question is a copy of the original or merely similar to it. In such a case (e.g., two distinct news broadcasts ), the algorithm may report that the video in question is a copy, because news broadcasts often use similar banners and the presenters often sit in similar positions. Videos with very few changes between frames over time are also more vulnerable to hash collisions .
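As a toy version of the grid-based brightness fingerprint described earlier in this section, each frame can be reduced to a coarse block-average signature and the per-frame signatures concatenated. A Python sketch (the grid size, thresholding, and layout are our own simplifications, not any published algorithm):

import numpy as np

def frame_fingerprint(frame, grid=(4, 4)):
    # Average brightness over a coarse grid, then record, for each cell,
    # whether it is brighter than the frame's mean: a compact binary signature.
    h, w = frame.shape
    gh, gw = grid
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return (cells > cells.mean()).astype(np.uint8).flatten()

def video_fingerprint(frames):
    # Concatenate per-frame signatures; similar videos give signatures with a
    # small Hamming distance, which a database lookup can exploit.
    return np.concatenate([frame_fingerprint(f) for f in frames])

frames = [np.random.randint(0, 256, (480, 640)) for _ in range(5)]
print(video_fingerprint(frames))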
The following are some algorithms and techniques proposed for video copy detection.
In this algorithm, a global intensity is defined at each instant as the weighted sum of the intensities of all pixels in the frame, computed along the whole video. Thus, an identity for a video sample can be constructed on the basis of the length of the video and the pixel intensities throughout.
The global intensity a(t) is defined as:
a(t) = \sum_{i=1}^{N} K(i) \, (I(i, t-1))^{2}
where K(i) is the weight assigned to pixel i, I(i, t) is the intensity of pixel i in the frame at time t, and N is the number of pixels in the image.
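A direct reading of this formula in code, assuming the video is a sequence of 2-D frames and K is a per-pixel weight map (both are hypothetical example inputs):

import numpy as np

def global_intensity(frames, weights):
    # a(t) = sum over pixels i of K(i) * I(i, t-1)^2, one value per time step.
    return np.array([np.sum(weights * frame.astype(np.float64) ** 2)
                     for frame in frames])

frames = [np.random.randint(0, 256, (120, 160)) for _ in range(8)]
weights = np.ones((120, 160)) / (120 * 160)    # uniform weighting as an example
print(global_intensity(frames, weights))       # the temporal descriptor of the clip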
In this algorithm, each frame of the video is divided into N blocks, sorted by gray level . It is then possible to create a vector describing the average gray level of each block.
With these average levels it is possible to create a new vector S(t) , the video's signature:
S(t) = (r_1, r_2, \cdots, r_N)
To compare two videos, the algorithm defines a measure D(t) of the difference between the reference video R and the candidate video C.
D(t) = \frac{1}{T} \sum_{i=t-\frac{T}{2}}^{t+\frac{T}{2}} \left| R(i) - C(i) \right|
The value returned by D(t) helps determine whether the video in question is a copy. [ clarification needed ]
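A sketch of the comparison step in Python: each frame is reduced to per-block average gray levels, and D(t) averages the signature differences between a reference video R and a candidate C over a window of T frames (the block grid, window, and the reading of R and C as reference and candidate signatures are example assumptions):

import numpy as np

def block_signature(frame, blocks=(3, 3)):
    # S(t): the average gray level of each of N blocks of the frame.
    h, w = frame.shape
    bh, bw = blocks
    cropped = frame[:h - h % bh, :w - w % bw].astype(np.float64)
    return cropped.reshape(bh, h // bh, bw, w // bw).mean(axis=(1, 3)).flatten()

def dissimilarity(ref_frames, cand_frames):
    # D(t) = (1/T) * sum over the window of |R(i) - C(i)|, using block signatures.
    diffs = [np.abs(block_signature(r) - block_signature(c)).mean()
             for r, c in zip(ref_frames, cand_frames)]
    return float(np.mean(diffs))

ref = [np.random.randint(0, 256, (120, 160)) for _ in range(10)]
cand = [f.copy() for f in ref]          # an exact copy gives a dissimilarity of 0
print(dissimilarity(ref, cand))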
This technique was proposed by L.Chen and F. Stentiford. A measurement of dissimilarity is made by combining the two aforementioned algorithms, Global temporal descriptors and Global ordinal measurement descriptors , in time and space . [ clarification needed ]
In 2019, Facebook open sourced TMK+PDQF, [ 2 ] part of a suite of tools used at Facebook to detect harmful content. It generates a signature of a whole video, and can easily handle changes in format or added watermarks, but is less tolerant of cropping or clipping. [ 3 ]
Described by A. Joly et al., this algorithm is an improvement of the Harris interest point detector. This technique suggests that in many videos a significant number of frames are almost identical, so it is more efficient to test not every frame but just those depicting a significant amount of motion.
ViCopT uses the interest points from each image to define a signature of the whole video. In every image, the algorithm identifies and defines two parts: the background , a set of static elements along a temporal sequence, and the motion , persistent points changing positions throughout the video.
This algorithm was developed by I. Laptev and T. Lindeberg. It uses the interest point technique in both space and time to define the video signature, and creates a 34-dimensional vector that stores this signature.
There exist algorithms for video copy detection that are in use today. In 2007, there was an evaluation showcase known as the Multimedia Understanding Through Semantics, Computation and Learning (MUSCLE) , which tested video copy detection algorithms on various video samples, from home video recordings to TV show segments, ranging from one minute to one hour in length.
|
https://en.wikipedia.org/wiki/Video_copy_detection
|
Video design or projection design is a creative field of stagecraft. It is concerned with the creation and integration of film, motion graphics and live camera feed into the fields of theatre , opera , dance , fashion shows , concerts and other live events. Video design has only recently gained recognition as a separate creative field; it has become an integral tool for engagement and learning, and its influence now extends into other domains such as education. A review of 113 peer-reviewed studies published between 1992 and 2021 revealed a marked increase in research on video design principles, particularly after 2008. This surge correlates with the proliferation of platforms like YouTube , which have popularized video-based learning. [ 1 ] The United Scenic Artists' Local 829, a union representing designers and scenic artists in the US entertainment industry, added the Global Projection Designer membership category in 2007. [ 2 ] Prior to this, the responsibilities of video design would often be taken on by a scenic designer or lighting designer . A person who practices the art of video design is often known as a Video Designer . However, naming conventions vary worldwide, so practitioners may also be credited as Projection Designer , "Media Designer", Cinematographer or Video Director (amongst others). As a relatively new field of stagecraft, practitioners create their own definitions, rules and techniques. [ 3 ]
Filmmaking and video production content has been used in performance for many years, [ 4 ] as has large format slide projection delivered by systems such as the PANI projector. [ 5 ] The German Erwin Piscator , as stage director at the Berlin Volksbühne in the 1920s, made extensive use of film projected onto his sets. [ 6 ] However, the development of digital projection technology in the mid 90s, and the resulting drop in price, made it more attractive and practical to live performance producers, directors and scenic designers. The role of the video designer has developed as a response to this, and in recognition of the demand in the industry for experienced professionals to handle the video content of a production. [ 7 ]
United Scenic Artists ' Local 829, the Union representing Scenic Artists in the USA has included "Projection Designers" as of mid- 2007. [ 8 ] This means anybody working in this field will be doing so officially as "Projection Designer" if he or she is working under a union contract, even if the design utilizes technology other than video projectors . The term "Projection Designer" stems from the days when slide and film projectors were the primary projection source and is now in wide use across North America.
MA Digital Theatre , University of the Arts London is the first Master's level course in the UK designed to teach video design exclusively as a specific discipline, rather than embedding it into scenic design.
In addition, Opera Academy Verona has run a workshop laboratory in projection design for opera and theatre since 2009, directed by Carlo Saleti, Gianfranco Veneruci and Florian Canga.
In the USA, a number of programs started at about the same time, reflecting the growing acceptance of the profession and the need for skilled projection designers. Yale University began a graduate-level program in Projection Design in 2010, [ 9 ] headed by Wendall K. Harrington . CalArts has offered its concentration Video For Performance since the mid-2000s, currently led by Peter Flaherty, while UT Austin started the MFA concentration Integrated Media for Live Performance, also in 2010, led by Sven Ortel. Both the UT Austin and Yale programs are part of an MFA in Design and graduated their first students in 2013.
These components of video design serve as a basic foundation for developing a theatrical production that mesmerizes and enhances the audience's sensory experience. They include: environment, color, space, scale, movement and sound design.
This is the canvas video designers work with when constructing a compelling story for the audience. As Miroslaw Rogala describes in the article "Nature Is Leaving Us: A Video Theatre Work", "by implicit contract with the audience, I am promising them a vaster canvas than their predetermined notions of television; I am therefore demanding more from them in terms of their attention and engagement." [ 10 ] By harnessing the physical 2D layer of video projection, designers have the ability to construct a visual field where their artwork is a living, breathing physical manifestation of their idea.
This component of video design describes the manipulation of perspective in a play. Rogala breaks these perspectives into three, namely the "frog-eye view", the "human view" and the "bird-eye view". By tilting the projection or camera along an axis, the designer manipulates the views to create imbalance or invoke an emotion in the audience. [ 11 ]
This is a tool used by a video designer to fit a video projection to multiple screens (or video walls), ranging from small, intimate displays to large video walls. Doing so shifts the audience's perception of proximity , presence , and importance . Manipulating scale allows video designers to disrupt realism, creating constructed views that contribute to the overall play. According to Rogala, "By altering the scale of the projected images—from close-up facial expressions to full-body silhouettes—we shift the viewer’s perception of spatial relationships and intimacy." [ 12 ]
This is a tool used by video designers to invoke an emotional response from the audience. According to Parker-Starbuck, "The projected image does not merely serve as a backdrop or setting, but becomes a performative element that interacts with the live body, space, and time, thereby challenging traditional notions of theatrical presence." [ 13 ] The use of color in this context serves as a means of creating an immersive experience, which ultimately influences the audience's emotions.
This is a multilayered tool not constrained to just a performer's body movements; it extends to camera movements , projection movement and media transitions. As Rogala puts it: "Movement in Nature Is Leaving Us is not confined to the body. It is distributed across multiple visual planes — live, recorded, and projected — producing a spatial rhythm that defies theatrical gravity." [ 14 ] It is used by video designers to create fluid, nonlinear experiences for the audience. "The transitions between live action and mediated movement are seamless, allowing the viewer to experience a choreography of perception as much as of bodies." [ 15 ]
In Nature Is Leaving Us, sound is not treated as just a background or temporal filler. Instead, it is used as a tool for shaping temporal rhythm and psychological tone. Manipulating this tool induces a heightened sensory reception in the audience. As Rogala puts it, "digitally altered voices, sampled sounds, and non-linear loops envelop the viewer in a sonic architecture that resists narrative cohesion." [ 16 ]
Depending on the production, and due to the crossover of this field with the fields of lighting design and scenic design , a video designer's roles and responsibilities may vary from show to show. A video designer may take responsibility for any or all of the following.
This is a very wide skills base, and it is not uncommon for a video designer to work with associates or assistants who can take responsibility for certain areas. For example, a video designer may conceptually design the video content but hire a skilled animator to create it, a programmer to program the control system, a production engineer to design and engineer the control system, and a projectionist to choose the optimum projection positions and maintain the equipment.
Concert video design is a niche of the filmmaking and video production industry that involves the creation of original video content intended explicitly for display during a live concert performance.
The creation of visuals for live music performances bears a close resemblance to music videos , but the content is typically meant to be displayed as 'backplate' imagery that adds a visual component to the music performed onstage. However, as the use of video content during musical performances has grown in popularity since the turn of the 21st century, it has become more common to have self-standing 'introductory' and 'interstitial' videos that play on the on-stage screens without the performers. These pieces may include footage of the artist or artists, shot specifically for the video and presented onstage with pre-recorded music, so that the final appearance is essentially a music video. Such stand-alone videos, however, are typically only viewed in this live setting and may include additional theatrical sound effects .
The earliest concert video visuals likely date to the late 1960s when concerts for artists such as Jimi Hendrix and The Doors featured psychedelic imagery on projection screens suspended behind the performers. Live concert performances took on more and more theatrical elements particularly notable in the concert events put on by Pink Floyd throughout their career.
Laurie Anderson was among the earliest to experiment with video content as part of a live performance, and her ideas and images were a direct inspiration to performers as diverse as David Bowie , Madonna and Kanye West . In 1982, Devo integrated rear-projected visuals into their concert set, choreographing themselves to match and interact with the action on the video for several songs, but the concert that made video content 'standard practice' was U2's 1993 Zoo TV Tour , conceived and designed by production designer Willie Williams , a collaborator of Laurie Anderson's .
Video designers make use of many technologies from the fields of stagecraft , broadcast equipment and home cinema equipment to build a workable video system, including technologies developed specifically for live video and technologies appropriated from other fields. A video system may include any of the following:
|
https://en.wikipedia.org/wiki/Video_design
|
A game programmer is a software engineer , programmer , or computer scientist who primarily develops codebases for video games or related software , such as game development tools . Game programming has many specialized disciplines, all of which fall under the umbrella term of "game programmer". [ 1 ] [ 2 ] A game programmer should not be confused with a game designer , who works on game design . [ 3 ]
In the early days of video games (from the early 1970s to mid-1980s), a game programmer also took on the job of a designer and artist . This was generally because the abilities of early computers were so limited that having specialized personnel for each function was unnecessary. Game concepts were generally light and games were only meant to be played for a few minutes at a time, but more importantly, art content and variations in gameplay were constrained by computers' limited power.
Later, as specialized arcade hardware and home systems became more powerful, game developers could develop deeper storylines and could include such features as high-resolution and full color graphics, physics , advanced artificial intelligence and digital sound . Technology has advanced to such a great degree that contemporary games usually boast 3D graphics and full motion video using assets developed by professional graphic artists . Nowadays, the derogatory term " programmer art " has come to imply the kind of bright colors and blocky design that were typical of early video games.
The desire for adding more depth and assets to games necessitated a division of labor . Initially, art production was relegated to full-time artists . Next game programming became a separate discipline from game design . Now, only some games, such as the puzzle game Bejeweled , are simple enough to require just one full-time programmer. Despite this division, however, most game developers (artists, programmers and even producers ) have some say in the final design of contemporary games.
A contemporary video game may include advanced physics, artificial intelligence, 3D graphics, digitised sound, an original musical score, complex strategy and may use several input devices (such as mice , keyboards , gamepads and joysticks ) and may be playable against other people via the Internet or over a LAN . Each aspect of the game can consume all of one programmer's time and, in many cases, several programmers. Some programmers may specialize in one area of game programming , but many are familiar with several aspects. The number of programmers needed for each feature depends somewhat on programmers' skills, but is mostly dictated by the type of game being developed.
Game engine programmers create the base engine of the game, including the simulated physics and graphics disciplines. [ 4 ] Increasingly, video games use existing game engines , either commercial, open source or free . They are often customized for a particular game, and these programmers handle these modifications.
A game's physics programmer is dedicated to developing the physics a game will employ. [ 5 ] Typically, a game will only simulate a few aspects of real-world physics. For example, a space game may need simulated gravity , but would not have any need for simulating water viscosity .
Since processing cycles are always at a premium, physics programmers may employ "shortcuts" that are computationally inexpensive, but look and act "good enough" for the game in question. In other cases, unrealistic physics are employed to allow easier gameplay or for dramatic effect. Sometimes, a specific subset of situations is specified and the physical outcome of such situations are stored in a record of some sort and are never computed at runtime at all.
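A minimal Python sketch of such a shortcut, assuming a simple projectile with constant gravity and no air resistance: outcomes are precomputed for a coarse set of launch angles and looked up at runtime instead of being simulated every frame. The numbers and function names are purely illustrative.

```python
import math

GRAVITY = 9.81  # m/s^2, assumed constant; no air resistance

def precompute_ranges(speed, angle_step_deg=5):
    """Precompute projectile ranges for a coarse set of launch angles.

    At runtime the game looks up the nearest entry instead of integrating
    the trajectory every frame -- a typical "good enough" physics shortcut.
    """
    table = {}
    for deg in range(0, 91, angle_step_deg):
        rad = math.radians(deg)
        table[deg] = speed ** 2 * math.sin(2 * rad) / GRAVITY
    return table

ranges = precompute_ranges(speed=20.0)
# Nearest-entry lookup at runtime instead of a full simulation:
query = 37
nearest = min(ranges, key=lambda deg: abs(deg - query))
print(f"approximate range at {query} degrees: {ranges[nearest]:.1f} m")
```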
Some physics programmers may even delve into the difficult tasks of inverse kinematics and other motions attributed to game characters, but increasingly these motions are assigned via motion capture libraries so as not to overload the CPU with complex calculations.
Historically, this title usually belonged to a programmer who developed specialized blitter algorithms and clever optimizations for 2D graphics . Today, however, it is almost exclusively applied to programmers who specialize in developing and modifying complex 3D graphic renderers. Some 2D graphics skills have just recently become useful again, though, for developing games for the new generation of cell phones and handheld game consoles .
A 3D graphics programmer must have a firm grasp of advanced mathematical concepts such as vector and matrix math, quaternions and linear algebra .
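As a small illustration of the kind of math involved, the sketch below rotates a 3D vector with a quaternion using NumPy; it is not taken from any particular engine, where such operations would normally be hand-optimized.

```python
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(vector, axis, angle):
    """Rotate a 3D vector about a unit axis by `angle` radians using q v q*."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = angle / 2.0
    q = np.concatenate(([np.cos(half)], np.sin(half) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v = np.concatenate(([0.0], vector))
    return quat_multiply(quat_multiply(q, v), q_conj)[1:]

# Rotating the x axis 90 degrees about z yields (approximately) the y axis.
print(rotate([1.0, 0.0, 0.0], axis=[0.0, 0.0, 1.0], angle=np.pi / 2))
```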
Skilled programmers specializing in this area of game development can demand high wages and are usually a scarce commodity. [ citation needed ] Their skills can be used for video games on any platform .
An AI programmer develops the logic used to simulate intelligence in enemies and opponents. [ 6 ] It has recently evolved into a specialized discipline, as these tasks used to be implemented by programmers who specialized in other areas. An AI programmer may program pathfinding , strategy and enemy tactic systems. This is one of the most challenging aspects of game programming and its sophistication is developing rapidly. Contemporary games dedicate approximately 10 to 20 percent of their programming staff to AI. [ 7 ]
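A minimal sketch of the pathfinding part of this work, assuming a simple 2D tile grid; production code would normally use A* with a heuristic and navigation meshes, but the breadth-first search below shows the core idea of expanding reachable cells until a route to the goal is found.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D grid (0 = walkable, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    if goal not in came_from:
        return None  # no route exists
    # Walk backwards from the goal to reconstruct the path.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 0)))
```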
Some games, such as strategy games like Civilization III or role-playing video games such as The Elder Scrolls IV: Oblivion , use AI heavily, while others, such as puzzle games , use it sparingly or not at all. Many game developers have created entire languages that can be used to program their own AI for games via scripts . These languages are typically less technical than the language used to implement the game, and will often be used by the game or level designers to implement the world of the game. Many studios also make their games' scripting available to players, and it is often used extensively by third party mod developers .
The AI technology used in game programming should not be confused with academic AI programming and research. Although both areas do borrow from each other, they are usually considered distinct disciplines, though there are exceptions. For example, the 2001 Lionhead Studios game Black & White features a unique AI approach to a user-controlled creature that uses learning to model behaviors during gameplay. [ 8 ] In recent years, more effort has been directed towards bridging promising fields of AI research and game AI programming. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Not always a separate discipline, sound programming has been a mainstay of game programming since the days of Pong . Most games make use of audio, and many have a full musical score. Computer audio games eschew graphics altogether and use sound as their primary feedback mechanism. [ 13 ]
Many games use advanced techniques such as 3D positional sound , making audio programming a non-trivial matter. With these games, one or two programmers may dedicate all their time to building and refining the game's sound engine, and sound programmers may be trained or have a formal background in digital signal processing .
Scripting tools are often created or maintained by sound programmers for use by sound designers . These tools allow designers to associate sounds with characters, actions, objects and events while also assigning music or atmospheric sounds for game environments (levels or areas) and setting environmental variables such as reverberation.
Though all programmers add to the content and experience that a game provides, a gameplay programmer focuses more on a game's strategy, implementation of the game's mechanics and logic, and the "feel" of a game. This is usually not a separate discipline, as what this programmer does usually differs from game to game, and they will inevitably be involved with more specialized areas of the game's development such as graphics or sound.
This programmer may implement strategy tables, tweak input code, or adjust other factors that alter the game. Many of these aspects may be altered by programmers who specialize in these areas, however (for example, strategy tables may be implemented by AI programmers).
In early video games, gameplay programmers would write code to create all the content in the game—if the player was supposed to shoot a particular enemy, and a red key was supposed to appear along with some text on the screen, then this functionality was all written as part of the core program in C or assembly language by a gameplay programmer.
Today the core game engine is usually separated from gameplay programming. This has several development advantages. The game engine deals with graphics rendering, sound, physics and so on, while a scripting language deals with things like cinematic events, enemy behavior and game objectives. Large game projects can have a team of scripters to implement these sorts of game content.
Scripters usually are also game designers. It is often easier to find a qualified game designer who can be taught a script language as opposed to finding a qualified game designer who has mastered C++ .
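The split between engine code and designer-facing scripts can be sketched as follows. Here Python stands in for a proprietary scripting language, and the event and function names are hypothetical: the engine exposes named events, and gameplay scripts hook into them without touching engine code.

```python
class ScriptHost:
    """A toy event bus standing in for an engine's scripting interface."""

    def __init__(self):
        self.handlers = {}

    def on(self, event_name):
        """Decorator used by gameplay scripts to register a handler."""
        def register(func):
            self.handlers.setdefault(event_name, []).append(func)
            return func
        return register

    def fire(self, event_name, **context):
        """Called from engine code when something happens in the world."""
        for handler in self.handlers.get(event_name, []):
            handler(**context)

host = ScriptHost()

# --- "script side": what a level designer might write -------------------
@host.on("enemy_killed")
def drop_red_key(enemy, player):
    if enemy == "boss":
        print(f"{player} receives the red key!")

# --- "engine side": the core loop reports events -------------------------
host.fire("enemy_killed", enemy="boss", player="Player 1")
```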
This programmer specializes in programming user interfaces (UIs) for games. [ 14 ] Though some games have custom user interfaces, this programmer is more likely to develop a library that can be used across multiple projects. Most UIs look 2D, though contemporary UIs usually use the same 3D technology as the rest of the game so some knowledge of 3D math and systems is helpful for this role. Advanced UI systems may allow scripting and special effects, such as transparency, animation or particle effects for the controls.
Input programming, while usually not a job title, or even a full-time position on a particular game project, is still an important task. This programmer writes the code specifying how input devices such as a keyboard , mouse or joystick affect the game. These routines are typically developed early in production and are continually tweaked during development. Normally, one programmer does not need to dedicate his entire time to developing these systems. A real-time motion-controlled game utilizing devices such as the Wii Remote or Kinect may need a very complex and low latency input system, while the HID requirements of a mouse-driven turn-based strategy game such as Heroes of Might and Magic are significantly simpler to implement.
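A minimal sketch of device-agnostic input handling: raw device events are mapped to abstract game actions so that gameplay code never depends on a particular controller. The bindings and action names here are invented for the example.

```python
# Map (device, raw event) pairs to abstract actions the gameplay code understands.
BINDINGS = {
    ("keyboard", "W"): "move_forward",
    ("keyboard", "SPACE"): "jump",
    ("gamepad", "BUTTON_A"): "jump",
}

def translate(device, raw_event):
    """Return the abstract action bound to a raw device event, if any."""
    return BINDINGS.get((device, raw_event))

for device, event in [("keyboard", "SPACE"), ("gamepad", "BUTTON_A")]:
    print(device, "->", translate(device, event))
```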
This programmer writes code that allows players to compete or cooperate, connected via a LAN or the Internet (or in rarer cases, directly connected via modem ). [ 15 ] Programmers implementing these game features can spend all their time in this one role, which is often considered one of the most technically challenging. Network latency , packet compression, and dropped or interrupted connections are just a few of the concerns one must consider. Although multi-player features can consume the entire production timeline and require the other engine systems to be designed with networking in mind, network systems are often put off until the last few months of development, adding additional difficulties to this role. Some titles have had their online features (often considered lower priority than the core gameplay) cut months away from release due to concerns such as lack of management, design forethought, or scalability. Virtua Fighter 5 for the PS3 is a notable example of this trend. [ 16 ]
The tools programmer [ 17 ] can assist the development of a game by writing custom tools for it. Game development tools often contain features such as script compilation, importing or converting art assets, and level editing. While some tools used may be COTS products such as an IDE or a graphics editor, tools programmers create tools with specific functions tailored to a specific game which are not available in commercial products. For example, an adventure game developer might need an editor for branching story dialogs , and a sports game developer could use a proprietary editor to manage players and team stats. These tools are usually not available to the consumers who buy the game.
Porting a game from one platform to another has always been an important activity for game developers. Some programmers specialize in this activity, converting code from one operating system to work on another. Sometimes, the programmer is responsible for making the application work not for just one operating system, but on a variety of devices, such as mobile phones . Often, however, "porting" can involve re-writing the entire game from scratch as proprietary languages , tools or hardware make converting source code a fruitless endeavour.
This programmer must be familiar with both the original and target operating systems and languages (for example, converting a game originally written in C++ to Java ), convert assets, such as artwork and sounds or rewrite code for low memory phones. This programmer may also have to side-step buggy language implementations, some with little documentation, refactor code , oversee multiple branches of code, rewrite code to scale for wide variety of screen sizes and implement special operator guidelines. They may also have to fix bugs that were not discovered in the original release of a game.
The technology programmer is more likely to be found in larger development studios with specific departments dedicated solely to R&D . Unlike other members of the programming team, the technology programmer usually isn't tied to a specific project or type of development for an extended length of time, and they will typically report directly to a CTO or department head rather than a game producer. As the job title implies, this position is extremely demanding from a technical perspective and requires intimate knowledge of the target platform hardware. Tasks cover a broad range of subjects including the practical implementation of algorithms described in research papers, very low-level assembly optimization and the ability to solve challenging issues pertaining to memory requirements and caching issues during the latter stages of a project. There is considerable amount of cross-over between this position and some of the others, particularly the graphics programmer.
In smaller teams, one or more programmers will often be described as 'Generalists' who will take on the various other roles as needed. Generalists are often engaged in the task of tracking down bugs and determining which subsystem expertise is required to fix them.
The lead programmer is ultimately in charge of all programming for the game. It is their job to make sure the various submodules of the game are being implemented properly and to keep track of development from a programming standpoint. A person in this role usually transitions from other aspects of game programming to this role after several years of experience. Despite the title, this person usually has less time for writing code than other programmers on the project as they are required to attend meetings and interface with the client or other leads on the game. However, the lead programmer is still expected to program at least some of the time and is also expected to be knowledgeable in most technical areas of the game. There is often considerable common ground in the role of technical director and lead programmer, such that the jobs are often covered by one person.
Game programmers can specialize on one platform or another, such as the Wii U or Windows . So, in addition to specializing in one game programming discipline, a programmer may also specialize in development on a certain platform. Therefore, one game programmer's title might be "PlayStation 3 3D Graphics Programmer." Some disciplines, such as AI, are transferable to various platforms and needn't be tailored to one system or another. Also, general game development principles such as 3D graphics programming concepts, sound engineering and user interface design are transferable between platforms.
Notably, there are many game programmers with no formal education in the subject, having started out as hobbyists and doing a great deal of programming on their own, for fun, and eventually succeeding because of their aptitude and homegrown experience. However, most job solicitations for game programmers specify a bachelor's degree (in mathematics, physics, computer science, "or equivalent experience").
Increasingly, universities are starting to offer courses and degrees in game programming. Such degrees have considerable overlap with computer science and software engineering degrees. [ citation needed ]
Salaries for game programmers vary from company to company and country to country. In general, however, pay for game programming is about the same as for comparable jobs in the business sector. This is despite the fact that game programming is some of the most difficult programming of any type and usually requires longer hours than mainstream programming.
Results of a 2010 survey in the United States indicate that the average salary for a game programmer is US$95,300 annually. The least experienced programmers, with less than 3 years of experience, make an average annual salary of over $72,000. The most experienced programmers, with more than 6 years of experience, make an average annual salary of over $124,000. [ 18 ]
Generally, lead programmers are the most well compensated, though some 3D graphics programmers may challenge or surpass their salaries. According to the same survey above, lead programmers on average earn $127,900 annually. [ 19 ]
Though sales of video games rival other forms of entertainment such as movies , the video game industry is extremely volatile. Game programmers are not insulated from this instability as their employers experience financial difficulty.
Third-party developers, the most common type of video game developers , depend upon a steady influx of funds from the video game publisher . If a milestone or deadline is not met (or for a host of other reasons, like the game is cancelled), funds may become short and the developer may be forced to retrench employees or declare bankruptcy and go out of business. Game programmers who work for large publishers are somewhat insulated from these circumstances, but even the large game publishers can go out of business (as when Hasbro Interactive was sold to Infogrames and several projects were cancelled; or when The 3DO Company went bankrupt in 2003 and ceased all operations). Some game programmers' resumes consist of short stints lasting no more than a year as they are forced to leap from one doomed studio to another. [ 20 ] This is why some prefer to consult and are therefore somewhat shielded from the effects of the fates of individual studios.
Most commercial computer and video games are written primarily in C++ , C , and some assembly language . Many games, especially those with complex interactive gameplay mechanics, tax hardware to its limit. As such, highly optimized code is required for these games to run at an acceptable frame rate. Because of this, compiled code is typically used for performance-critical components, such as visual rendering and physics calculations. Almost all PC games also use either the DirectX , OpenGL APIs or some wrapper library to interface with hardware devices.
Various script languages , like Ruby , Lua and Python , are also used for the generation of content such as gameplay and especially AI. Scripts are generally parsed at load time (when the game or level is loaded into main memory) and then executed at runtime (via logic branches or other such mechanisms). They are generally not executed by an interpreter , which would result in much slower execution. Scripts tend to be used selectively, often for AI and high-level game logic. Some games are designed with high dependency on scripts and some scripts are compiled to binary format before game execution. In the optimization phase of development, some script functions will often be rewritten in a compiled language.
Java is used for many web browser based games because it is cross-platform , does not usually require installation by the user, and poses fewer security risks, compared to a downloaded executable program. Java is also a popular language for mobile phone based games. Adobe Flash , which uses the ActionScript language, and JavaScript are popular development tools for browser-based games.
As games have grown in size and complexity, middleware is becoming increasingly popular within the industry. Middleware provides greater and higher level functionality and larger feature sets than the standard lower level APIs such as DirectX and OpenGL , such as skeletal animation . In addition to providing more complex technologies, some middleware also makes reasonable attempts to be platform independent , making common conversions from, for example, Microsoft Windows to PS4 much easier. Essentially, middleware is aimed at cutting out as much of the redundancy in the development cycle as possible (for example, writing new animation systems for each game a studio produces), allowing programmers to focus on new content.
Other tools are also essential to game developers: 2D and 3D packages (for example Blender , GIMP , Photoshop , Maya or 3D Studio Max ) enable programmers to view and modify assets generated by artists or other production personnel. Source control systems keep source code safe and secure and streamline merging. IDEs with debuggers (such as Visual Studio ) make writing code and tracking down bugs a less painful experience.
|
https://en.wikipedia.org/wiki/Video_game_programmer
|
Video optimization refers to a set of technologies used by mobile service providers to improve the consumer viewing experience by reducing video start times or re-buffering events. The process also aims to reduce the amount of network bandwidth consumed by video sessions. [ 1 ]
While optimization technology can be applied to videos played on a variety of media-consuming devices, the costliness of mobile streaming and increase in mobile video viewers has created a very high demand for optimization solutions among mobile service providers. [ 2 ]
When streaming over-the-top (OTT) content and video on demand, systems do not typically recognize the specific size, type, and viewing rate of the video being streamed. Video sessions, regardless of the rate of views, are each granted the same amount of bandwidth . This bottlenecking of content results in longer buffering time and poor viewing quality. [ 3 ] Some solutions, such as upLynk and Skyfire ’s Rocket Optimizer, attempt to resolve this issue by using cloud -based solutions to adapt and optimize over-the-top content. [ 4 ] [ 5 ] [ 6 ]
The spike in mobile video streaming has come about as a result of the development of the smartphone . Smartphones registered a 5% to 40% market penetration between 2007 and 2010 in the United States. [ 7 ] In the third quarter of 2011, smartphone sales increased by 42% from 2010. [ 8 ]
Mobile operators are facing an explosion in wireless data use, which is projected to grow 18-fold from 2011 to 2016 per the latest Cisco VNI forecast. [ 9 ]
With the use of mobile devices increasing so rapidly, and almost half of the traffic on mobile internet networks being accounted for by video sessions, mobile service providers have begun to recognize the need to provide higher quality video access while using the lowest possible bandwidth. [ 8 ] [ 10 ]
With the release of the iPhone 5 in September 2012, it has been predicted that LTE networks might experience decreased data speeds as streaming multimedia begins to tax the 4G network. [ 11 ] Cloud-based content optimizers that reduce the strain of over-the-top multimedia streaming could provide potential relief to mobile providers. [ 4 ] [ 12 ]
Since 2009, multiple solutions have been applied to the issue of video optimization. [ 13 ]
Pacing refers to a variety of techniques used for reducing traffic over a mobile network infrastructure. Pacing is a special form of rate limiting , where traffic delivery to a device is slowed down to the point that "just in time" delivery takes place. The idea behind pacing is to avoid traffic bursts and even out the data flow. If an object is delivered in its entirety, pacing provides no benefit. Where pacing can offer savings is when the object is "abandoned" part way through. When abandonment occurs, the portion of the object left in the receiving device buffer is effectively wasted. [ 3 ] [ 14 ]
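A back-of-the-envelope sketch of pacing in Python, under assumed figures: instead of bursting the whole file, the server keeps only a small buffer ahead of the playhead, which bounds how much data is wasted if the viewer abandons the session.

```python
def pacing_plan(media_bitrate_kbps, duration_s, buffer_target_s=10, margin=1.2):
    """Illustrate "just in time" delivery with assumed numbers.

    The server sends only slightly faster than the playback rate and keeps
    roughly `buffer_target_s` seconds buffered, so an abandoned session
    wastes at most the buffered portion rather than the whole file.
    """
    return {
        "send_rate_kbps": media_bitrate_kbps * margin,
        "max_wasted_kbit_on_abandon": media_bitrate_kbps * buffer_target_s,
        "wasted_without_pacing_kbit": media_bitrate_kbps * duration_s,  # worst case: whole file bursted
    }

print(pacing_plan(media_bitrate_kbps=800, duration_s=300))
```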
Another technique used in video optimization is known as video transrating, which involves modifying the video input stream. This modification is accomplished through an analysis of either "content" (to determine whether the bit rate of a particular video can be lowered without altering viewing quality), "device" (to recognize a specific streaming device and reduce the bit rate based on its resolution and screen size ), or "network" (in which the conditions of the network are estimated and adjustments in bit rate are made to accommodate varying network speeds without detracting from the viewing experience). Average transrating savings are typically less than 30% per video. [ 3 ] Transrating only allows modification of video quantization parameters and does not allow for modifications to the video resolution, codec , and other parameters. [ 15 ]
In contrast to transrating, transcoding converts data from one encoding to another. The two-step process of decoding and recoding digital media is typically performed to accommodate for specific target devices or workflows , but it can also be utilized for low-grade streaming optimization. [ 15 ] [ 16 ]
Full transcoding offers optimization rates of 60-80% per video by completely decoding and recoding digital media while allowing for changes in codec and resolution. [ 3 ] [ 13 ] [ 15 ] The flexible conversion techniques associated with full transcoding result in higher optimization savings without impacting the quality of the original media. [ 16 ] [ 17 ] While full transcoding is more taxing on central processing units than transrating, there are cloud-based solutions, such as Skyfire, that allow network architectures with inexpensive CPUs to utilize full transcoding. [ 4 ] [ 15 ] [ 16 ]
Adaptive bitrate (ABR) video streaming technology was implemented to solve some of the challenges with streaming high bitrate videos.
Videos streamed using traditional formats such as progressive download and RTSP have a common challenge; any given video must be encoded at a specific target bitrate (e.g., 500 kbps ) – and that is the bitrate regardless of the access network over which it is delivered.
If the chosen target bitrate is too high, the video will not be delivered smoothly over lower-speed networks and there will be slow start times and re-buffering throughout the video. Even on fast networks like LTE 4G, slow start times and re-buffering will occur during times of congestion or high network utilization.
If the chosen bitrate is low, on the other hand, the video quality will be lower – thereby reducing the customer’s quality of experience.
There are a number of ways of dealing with these challenges. One way is to take the YouTube approach. YouTube uses HTTP progressive download, and makes multiple versions of the video available at different resolutions and bitrates. Users themselves can then select the quality and bitrate that works best for them. If stalling or rebuffering occurs, they can select the next lower resolution and continue viewing the video.
Adaptive bitrate effectively automates these resolution and quality adjustments on behalf of the user. Each ABR video is encoded at multiple bitrates, each broken into "chunks" of varying lengths (e.g., Apple 's HTTP Live Streaming generally uses 10-second chunks). If network bandwidth is insufficient to deliver the video at the current bitrate, the client will request that the next "chunk" be at a lower bitrate; the quality of the video will be reduced, but re-buffering will be avoided. Conversely, if the network can deliver at higher than the current bitrate, the client will request that the next chunk be at a higher bitrate, and quality will increase. [ 18 ] [ 19 ] [ 20 ]
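A minimal sketch of the client-side decision, assuming a hypothetical bitrate ladder and a throughput estimate; real players also weigh buffer occupancy, but the core rule is the same: request the highest encoded bitrate that fits comfortably within the measured throughput.

```python
def choose_bitrate(available_kbps, measured_throughput_kbps, safety=0.8):
    """Pick the highest encoded bitrate that fits within a throughput budget."""
    budget = measured_throughput_kbps * safety
    candidates = [b for b in available_kbps if b <= budget]
    # Fall back to the lowest rung if even that exceeds the budget.
    return max(candidates) if candidates else min(available_kbps)

ladder = [400, 800, 1500, 3000, 6000]   # bitrates the video was encoded at (illustrative)
for throughput in (5000, 1200, 300):
    print(f"{throughput} kbit/s measured -> request {choose_bitrate(ladder, throughput)} kbit/s chunk")
```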
A newer approach is to perform video and multimedia optimization in "the cloud" – data centers either operated by the service provider or by a third party. The major benefits of this technique are that it allows smoother bit rate adaptation and utilizes transcoding and caching methods to distribute resources only when and where they are needed. [ 14 ]
|
https://en.wikipedia.org/wiki/Video_optimization
|
ViLTE , an acronym for " Video over LTE ", is a conversational (i.e. person-to-person) video service based on the IP Multimedia Subsystem (IMS) core network, like VoLTE. It has specific profiles for the control and media planes of the video service and uses LTE as the radio access medium. The service as a whole is governed by the GSM Association in PRD IR.94. [ 1 ] [ 2 ]
ViLTE uses the same control plane protocol as Voice over LTE ( VoLTE ), namely the Session Initiation Protocol ( SIP ). The IMS core network along with the applicable Application Server ( AS ) performs the call control. ViLTE uses the H.264 codec to encode and decode the video stream. [ 3 ] The H.264 codec delivers superior quality as compared to the low bit rate 3G-324M codec that is used in 3G conversational video calls.
It is vital that ViLTE video calls are allocated appropriate quality of service (QoS) to differentiate and prioritize this delay and jitter sensitive conversational traffic from other streaming video traffic that is not as delay or jitter sensitive. The mechanism used is called QoS Class Identifier (QCI). The ViLTE bearer traffic is typically allocated QCI=2, and the SIP-based IMS signalling QCI=5. [ 4 ]
As of February 2019 the Global Mobile Suppliers Association had identified 257 devices, virtually all of them phones, supporting ViLTE technology. [ 5 ] By August, continued momentum had seen the number of identified devices increase to 390. [ 6 ]
Many of the world’s largest handset vendors now have ViLTE capable devices on the market. As of August 2019, ViLTE devices were offered by 46 vendors/brands including Askey, BBK Electronics, Blackberry, Casper, Celkon, CENTRiC, Comio, Foxconn, General Mobile, GiONEE, HMD, HTC, Huaquin Telecom Technology, Huawei, Infinix, Infocus, Intex, Itel, Karbonn, Kult, Lanix, Lava, Lenovo, LG, LYF (Reliance Digital), Micromax, Mobiistar, Motorola, Panasonic, Reach, Samsung, Sonim, Sony Mobile, Spice Devices, Swipe Technologies, TCL, Tecno, Vestel, Xiaomi, YU (Micromax), Yulong Computer, Ziox, and ZTE. iPhones don’t support ViLTE. [ 7 ]
|
https://en.wikipedia.org/wiki/Video_over_LTE
|
Video over cellular ( VoC ), also known as VoCIP ( video over cellular Internet Protocol ), is a term used for processing streaming video, such as surveillance , using high-resolution video cameras over 3G and 4G cellular networks . Creating a VoC transmission requires encoding and decoding of video data packets. The method of transport over a cellular packet-switched network such as EvDO , HSPA , LTE or WiMax has been restricted to a standard five-gigabyte monthly data limit from the carrier. [ 1 ]
As of 2009, VoC solutions are used in applications for public safety and for TV broadcasting , using traditional wireless carriers such as Verizon Wireless , Sprint Nextel and AT&T Mobility that support 3G and 4G wireless broadband speeds. Public-safety organizations are harnessing this technology to support police and sheriff special forces such as SWAT and SERT programs that require covert video surveillance, without the wires previously required in traditional surveillance solutions, providing high-definition streaming video.
VoC is also rapidly gaining use in electronic news-gathering and remote broadcasting , where it is used in place of remote pickup units (RPUs), which must be temporarily fixed in a certain position, with an antenna often on a telescoping pole or satellite dish atop an outside broadcast van . RPUs are also limiting in terms of setup time and space, safety regarding overhead powerlines , and the requirement for a line of sight back to the TV station or a remote receiving antenna.
VoC now allows reporters to transmit live from moving vehicles , and is frequently used for storm chasing and reporting live from the scene of other still-breaking news stories. This may be as simple as smartphone apps like FaceTime or other off-the-shelf solutions, or dedicated or proprietary solutions housed in a small backpack transceiver unit worn by the cameraman . However, cellular service may be degraded or completely unavailable in the event of a mass call event or widespread power outage , both frequently caused by a disaster .
|
https://en.wikipedia.org/wiki/Video_over_cellular
|
A video search engine is a web-based search engine which crawls the web for video content. Some video search engines parse externally hosted content while others allow content to be uploaded and hosted on their own servers. Some engines also allow users to search by video format type and by length of the clip. The video search results are usually accompanied by a thumbnail view of the video.
Video search engines are computer programs designed to find videos stored on digital devices, either through Internet servers or in storage units from the same computer. These searches can be made through audiovisual indexing , which can extract information from audiovisual material and record it as metadata, which will be tracked by search engines.
The main use of these search engines is the increasing creation of audiovisual content and the need to manage it properly. The digitization of audiovisual archives and the establishment of the Internet, has led to large quantities of video files stored in big databases, whose recovery can be very difficult because of the huge volumes of data and the existence of a semantic gap.
The search criterion used by each search engine depends on its nature and purpose of the searches.
Metadata is information about the video itself: who the author of the video is, the creation date, the duration, and any other information that can be extracted from and embedded in the files themselves. On the Internet, metadata is often encoded in a language called XML, which works very well across the web and is readable by people. The information contained in these files is thus the easiest way to find data of interest to us.
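A small illustration of XML-encoded video metadata and how it can be read back, using Python's standard library; the element names here are invented for the example rather than taken from any standard schema.

```python
import xml.etree.ElementTree as ET

xml_metadata = """
<video>
  <title>Harbour at dawn</title>
  <author>A. Example</author>
  <created>2012-05-01</created>
  <duration units="seconds">372</duration>
</video>
"""

# Parse the XML and flatten it into a simple dictionary a search engine
# could index alongside the video file.
root = ET.fromstring(xml_metadata)
record = {child.tag: child.text for child in root}
print(record["title"], "-", record["duration"], "seconds")
```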
Videos carry two types of metadata: metadata that can be embedded in the video file itself, and external metadata from the page where the video is hosted. In both cases the metadata should be optimized so that it is ideal for indexing.
All video formats incorporate their own metadata: the title, description, coding quality or a transcription of the content are all possible. Programs such as FLV MetaData Injector, Sorenson Squeeze or Castfire exist for reviewing these data. Each one has its own utilities and specifications.
Converting from one format to another can lose much of this data, so check that the new format information is correct. It is therefore advisable to have the video in multiple formats, so all search robots will be able to find and index it.
In most cases the same mechanisms must be applied as in the positioning of an image or text content.
They are the most important factors when positioning a video, because they contain most of the necessary information. Titles have to be clearly descriptive and should omit every word or phrase that is not useful.
It should be descriptive, including keywords that describe the video without the need to see its title or description. Ideally, the words should be separated by dashes ("-").
On the page where the video is, it should be a list of keywords linked to the microformat "rel-tag". These words will be used by search engines as a basis for organizing information.
Although not completely standard, there are two formats that store information with a specified temporal component: one for subtitles and another for transcripts, which can also be used for subtitles. The formats are SRT or SUB for subtitles and TTXT for transcripts.
Speech recognition produces a transcript of the speech on a video's audio track, creating a text file. In this way, and with the help of a phrase extractor, one can easily check whether the video content is of interest. Some search engines, apart from using speech recognition to search for videos, also use it to find the specific point in a multimedia file at which a specific word or phrase is spoken, and so jump directly to that point. Gaudi (Google Audio Indexing), a project developed by Google Labs , uses voice recognition technology to locate the exact moment that one or more words were spoken within an audio track, allowing the user to go directly to that moment. If the search query matches some videos from YouTube, the positions are indicated by yellow markers, and the user must hover the mouse over them to read the transcribed text.
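A minimal sketch of this kind of lookup, using a tiny hand-written SRT transcript: searching the cues for a word returns the timestamp at which it is spoken, so playback can jump straight to that point. The transcript content is invented for the example.

```python
import re

srt = """1
00:00:01,000 --> 00:00:04,000
welcome to the lecture on video search

2
00:00:04,500 --> 00:00:08,000
today we discuss speech recognition
"""

def find_word(srt_text, word):
    """Return the start timestamp of the first cue containing `word`."""
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        start = lines[1].split(" --> ")[0]   # timing line of the cue
        text = " ".join(lines[2:]).lower()
        if re.search(r"\b" + re.escape(word.lower()) + r"\b", text):
            return start
    return None

print(find_word(srt, "recognition"))   # -> 00:00:04,500
```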
In addition to transcription, analysis can detect different speakers and sometimes attribute the speech to an identified, named speaker.
Text recognition can be very useful for recognizing characters in videos, such as "chyrons". As with speech recognizers, there are search engines that allow (through character recognition) a video to be played from a particular point.
TalkMiner, an example of searching for specific fragments of videos through text recognition, analyzes each video once per second looking for the identifying signs of a slide, such as its shape and static nature, captures the image of the slide and uses optical character recognition (OCR) to detect the words on the slides. These words are then indexed in the TalkMiner search engine, which currently offers users more than 20,000 videos from institutions such as Stanford University, the University of California at Berkeley, and TED.
Through visual descriptors we can analyze the frames of a video and extract information that can be stored as metadata. The descriptions are generated automatically and can describe different aspects of the frames, such as color, texture, shape, motion and situation.
Video analysis can lead to automatic chaptering, using techniques such as detecting changes of camera angle or identifying audio jingles. By knowing the typical structure of a video document, it is possible to identify the starting and ending credits, the content parts, and the beginning and end of advertising breaks.
The usefulness of a search engine depends on the relevance of the result set returned. While there may be millions of videos that include a particular word or phrase, some videos may be more relevant, popular or have more authority than others. This arrangement has a lot to do with search engine optimization.
Most search engines use different methods to classify the results and provide the best video in the first results. However, most programs allow sorting the results by several criteria.
This criterion is more ambiguous and less objective, but it is sometimes the closest to what we want; it depends entirely on the searcher and the algorithm its owner has chosen. That is why it has always been debated, and now that search results are so ingrained in our society it is debated even more. This type of ranking often depends on the number of times the searched word appears, the number of views of the video, the number of pages that link to the content and the ratings given by users who have seen it. [ 1 ]
This is a criterion based entirely on the timeline. Results can be sorted according to how long they have been in the repository.
It can give us an idea of the popularity of each video.
This is the length of the video and can give an idea of what kind of video it is.
It is common practice in repositories to let users rate the videos, so that content of quality and relevance will rank highly in the list of results, gaining visibility. This practice is closely related to virtual communities.
We can distinguish two basic types of interfaces, some are web pages hosted on servers which are accessed by Internet and searched through the network, and the others are computer programs that search within a private network.
Within Internet interfaces we can find repositories that host video files which incorporate a search engine that searches only their own databases, and video searchers without repository that search in sources of external software.
A repository provides hosting for video files stored on its servers and usually has an integrated search engine that searches only through the videos uploaded by its users. Among the first, or at least the most famous, web repositories are the portals Vimeo, Dailymotion and YouTube.
Their searches are often based on reading the metadata tags, titles and descriptions that users assign to their videos. The ordering criterion for the results of these searches is usually selectable among the file upload date, the number of views or what they call relevance. Still, sorting criteria are nowadays the main weapon of these websites, because the positioning of videos is important in terms of promotion. [ citation needed ]
They are websites specialized in searching videos across the network or certain pre-selected repositories. They work by web spiders that inspect the network in an automated way to create copies of the visited websites, which will then be indexed by search engines, so they can provide faster searches.
Sometimes a search engine only searches audiovisual files stored within a computer or, as happens in television, on a private server that users access through a local area network. These searchers are usually software or rich Internet applications with very specific search options for maximum speed and efficiency when presenting the results. They are typically used for large databases and are therefore highly focused on satisfying the needs of television companies. An example of this type of software is the Digition Suite which, apart from being a benchmark for this kind of interface, is used as the storage and retrieval system of the Corporació Catalana de Mitjans Audiovisuals . [ 2 ]
Perhaps this suite's strongest point is that it integrates the entire process of creation, indexing, storage, search, editing and retrieval. Once audiovisual content has been digitized, it is indexed with techniques of different levels depending on the importance of the content, and it is stored. When users want to retrieve a particular file, they fill in search fields such as the programme title, broadcast date, people who appear or the name of the producer, and the robot starts the search. Once the results appear, arranged according to the user's preferences, the user can play low-quality versions of the videos in order to work as quickly as possible. When the desired content is found, it is downloaded in good definition, edited and played out. [ 3 ]
Video search has evolved slowly through several basic search formats which exist today, all of which use keywords . The keywords for each search can be found in the title of the media, in any text attached to the media and in the content of linked web pages, and are also defined by the authors and users of hosted video resources.
Some video search is performed using human-powered search, while other approaches build technological systems that work automatically to detect what is in the video and match it to the searcher's needs. Many efforts to improve video search, including both human-powered search and algorithms that recognize what is inside the video, have meant a complete redevelopment of search efforts.
It is generally acknowledged that speech-to-text is possible, though recently Thomas Wilde, the new CEO of Everyzing, noted that Everyzing works 70% of the time when there is music, ambient noise or more than one person speaking. With newscast-style speech (one person, speaking clearly, no ambient noise), that figure can rise to 93%. (From the Web Video Summit, San Jose, CA, June 27, 2007.)
Around 40 phonemes exist in every language with about 400 in all spoken languages. Rather than applying a text search algorithm after speech-to-text processing is completed, some engines use a phonetic search algorithm to find results within the spoken word. Others work by literally listening to the entire podcast and creating a text transcription using a sophisticated speech-to-text process. Once the text file is created, the file can be searched for any number of search words and phrases.
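A toy illustration of phoneme-level matching in Python: both the indexed audio and the query are represented as phoneme sequences and matched directly, rather than matching text after full speech-to-text. The miniature lexicon and phoneme stream are invented for the example.

```python
LEXICON = {                      # word -> phonemes (simplified, ARPAbet-like)
    "video":  ["V", "IH", "D", "IY", "OW"],
    "search": ["S", "ER", "CH"],
}

def to_phonemes(words):
    """Turn a text query into a flat phoneme sequence using the toy lexicon."""
    return [p for w in words.lower().split() for p in LEXICON.get(w, [])]

def phonetic_find(indexed_phonemes, query):
    """Return the offset where the query's phoneme sequence occurs, or -1."""
    q = to_phonemes(query)
    for i in range(len(indexed_phonemes) - len(q) + 1):
        if indexed_phonemes[i:i + len(q)] == q:
            return i
    return -1

# Phoneme stream produced (hypothetically) by an acoustic front end:
stream = ["HH", "AW", "T", "UW"] + LEXICON["search"] + LEXICON["video"]
print(phonetic_find(stream, "search video"))   # -> 4
```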
It is generally acknowledged that visual search into video does not work well and that no company is using it publicly. Researchers at UC San Diego and Carnegie Mellon University have been working on the visual search problem for more than 15 years, and admitted at a "Future of Search" conference at UC Berkeley in spring 2007 that it was years away from being viable even in simple search.
Search that is not affected by the hosting of video, where results are agnostic no matter where the video is located:
Search results are modified, or suspect, due to large hosted-video sites being given preferential treatment in search results:
|
https://en.wikipedia.org/wiki/Video_search_engine
|
Video spectroscopy combines spectroscopic measurements with video techniques. This technology has resulted from recent developments in hyperspectral imaging . A video-capable imaging spectrometer can work like a camcorder and provide full-frame spectral images in real time, enabling advanced (vehicle-based) mobile and hand-held imaging spectroscopy. Unlike hyperspectral line scanners, a video spectrometer can spectrally capture randomly and quickly moving objects and processes. The product of a conventional hyperspectral line scanner has typically been called a hyperspectral data cube. A video spectrometer produces a spectral image data series at much higher speeds (1 ms) and frequencies (25 Hz), which is called a hyperspectral video. This technology can enable novel solutions and pose new challenges in spectral tracking, field spectroscopy, spectral mobile mapping, real-time spectral monitoring and many other applications.
|
https://en.wikipedia.org/wiki/Video_spectroscopy
|
A video tape recorder ( VTR ) is a tape recorder designed to record and playback video and audio material from magnetic tape . The early VTRs were open-reel devices that record on individual reels of 2-inch-wide (5.08 cm) tape. They were used in television studios, serving as a replacement for motion picture film stock and making recording for television applications cheaper and quicker. Beginning in 1963, videotape machines made instant replay during televised sporting events possible. Improved formats, in which the tape was contained inside a videocassette , were introduced around 1969; the machines which play them are called videocassette recorders .
An agreement by Japanese manufacturers on a common standard recording format, which allowed cassettes recorded on one manufacturer's machine to play on another's, made a consumer market possible; and the first consumer videocassette recorder, which used the U-matic format, was introduced by Sony in 1971. [ 1 ]
In early 1951, Bing Crosby asked his chief engineer, John T. (Jack) Mullin, whether television could be recorded on tape as audio was. Mullin said that he thought it could be done. Crosby asked Ampex to build such a machine and also set up a laboratory for Mullin at Bing Crosby Enterprises (BCE) to develop one. [ 2 ] In 1951 it was believed that if the tape was run at a very high speed, it could provide the necessary bandwidth to record the video signal. The problem was that a video signal has a much wider bandwidth than an audio signal (6 MHz vs 20 kHz), requiring extremely high tape speeds to record it. There was also another problem: the magnetic head design would not permit bandwidths over 1 megahertz to be recorded regardless of the tape speed.
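An illustrative back-of-the-envelope calculation (the 15 in/s audio reference speed is an assumed figure, not from the source): if the shortest recordable wavelength is fixed by the head, the tape speed needed scales roughly with the signal bandwidth, which is why fixed-head, linear-scan video recorders needed impractically fast tape.

```python
# Assumed reference figures for illustration only.
audio_bandwidth_hz = 20_000        # ~20 kHz audio
video_bandwidth_hz = 6_000_000     # ~6 MHz video
audio_tape_speed_ips = 15          # a common professional audio tape speed

# If recordable wavelength is fixed, required tape speed scales with bandwidth.
ratio = video_bandwidth_hz / audio_bandwidth_hz
print(f"bandwidth ratio: {ratio:.0f}x")
print(f"naive linear tape speed: {audio_tape_speed_ips * ratio:,.0f} inches per second")
```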
The first efforts at video recording, using recorders similar to audio recorders with fixed heads, were unsuccessful. The first such demonstration of this technique was done by BCE on 11 November 1951. The result was a very poor picture. Another of the early efforts was the Vision Electronic Recording Apparatus , a high-speed multi-track machine developed by the BBC in 1952. [ 3 ] This machine used a thin steel tape on a 21-inch (53.5 cm) reel traveling at over 200 inches (510 cm) per second. Despite 10 years of research and improvements, it was never widely used due to the immense length of tape required for each minute of recorded video.
By 1952 BCE also had moved on to multi-track machines, but found limitations in recording bandwidth even at the high speeds. In 1953 BCE discovered that the magnetic head design was the problem. This problem was corrected and bandwidths exceeding the 1 megahertz limit were able to be recorded. [ 2 ] Since BCE and Ampex were working together on the video recorder, the new head design was shared with Ampex, which used it in its recorder. In 1955 BCE demonstrated a broadcast-quality color recorder that operated at 100 inches per second, and CBS ordered three of them. Many other fixed-head recording systems were tried but all required an impractically high tape speed. It became clear that practical video recording technology depended on finding some way of recording the wide-bandwidth video signal without the high tape speed required by linear-scan machines.
In 1953 Eduard Schüller of Telefunken patented the helical scan technology. Another solution was transverse-scan technology, developed by Ampex around 1954, in which the recording heads are mounted on a spinning drum and record tracks in the transverse direction, across the tape. By recording on the full width of the tape rather than just a narrow track down the center, this technique achieved a much higher density of data per linear centimeter of tape, allowing a lower tape speed of 15 inches per second to be used. The Ampex VRX-1000 became the world's first commercially successful videotape recorder in 1956. It uses the 2″ quadruplex format, using two-inch (5.1 cm) tape. [ 4 ] Because of its US$50,000 price, the Ampex VRX-1000 could be afforded only by the television networks and the largest individual stations. [ 5 ]
By early 1957 the only successful manufacturer of videotape was 3M , the product being exceedingly difficult to manufacture to the necessary quality. The three U.S. networks officially inaugurated use of videotape on 28 April 1957, "with the changeover to daylight saving time," at which time there were "probably not more than 50 useable rolls of tape among them—it was that critical." [ 6 ]
Ampex's quadruplex magnetic tape video recording system had certain limitations, such as the lack of clean pause, or still-frame, capability: when tape motion is stopped, only a single segment of the picture recording (just 16 lines of the picture) is present at the playback heads, so recognizable pictures can be reproduced only when the tape is playing at normal speed. [ 7 ] In spite of these drawbacks, quadruplex remained the broadcasting studio standard until about 1980. The helical scan system overcame this limitation. [ 8 ]
In 1959 JVC demonstrated its first helical scan VTR named KV-1. [ 9 ] In 1963, Philips introduced its EL3400 1" helical scan recorder (aimed at the business and domestic user), and Sony marketed the 2" PV-100, its first open-reel VTR intended for business, medical, airline, and educational use. [ 10 ]
The Telcan, produced by the Nottingham Electronic Valve Company and demonstrated on June 24, 1963, [ 11 ] was the first home video recorder. It could be bought as a unit or in kit form for £60. However, there were several drawbacks: it was expensive, it was not easy to put together, and it could record for only 20 minutes at a time in black-and-white. [ 12 ] [ 13 ] [ 14 ]
The Sony model CV-2000 , first marketed in 1965, was Sony's first VTR intended for home use and was based on half-inch tape. [ 15 ] Ampex and RCA followed in 1965 with their own open-reel monochrome VTRs priced under US $1,000 for the home consumer market. Prerecorded videos for home replay became available in 1967. [ 16 ]
The EIAJ format is a standard half-inch format used by various manufacturers. EIAJ-1 is an open-reel format. EIAJ-2 uses a cartridge that contains a supply reel, but not the take-up reel. Since the take-up reel is part of the recorder, the tape has to be fully rewound before removing the cartridge, which is a relatively slow procedure.
The development of the videocassette followed other replacements of open-reel systems with a cassette or cartridge in consumer items: the Stereo-Pak 4-track audio cartridge in 1962, the compact audio cassette and Instamatic film cartridge in 1963, the 8-track cartridge in 1965, and the Super 8 home motion picture film cartridge in 1966.
Before the invention of the video tape recorder, live video was recorded onto motion picture film stock in a process known as telerecording or kinescoping. Although the first quadruplex VTRs recorded with good quality, the recordings could not be slowed or freeze-framed , so kinescoping processes continued to be used for about a decade after the development of the first VTRs.
In the technique used in all transverse-scan video tape recorders, the recording heads are mounted in a rapidly spinning drum which is pressed against the moving tape, so the heads move across the tape in a transverse or nearly vertical path, recording the video signal in consecutive parallel tracks sideways across the tape. This allows use of the entire width of the tape, storing much more data per inch of tape, compared to the fixed head used in audio tape recording, which records a single track down the tape. The heads move across the tape at the high speed necessary to record the high-bandwidth video signal, but the tape moves at a slower speed through the machine. In addition, three ordinary tracks are recorded along the edge of the tape by stationary recording heads. For correct playback, the motion of the heads has to be precisely synchronized with the motion of the tape through the capstan, so a control track of synchronizing pulses is recorded. The other two tracks are for the audio channel and a cueing track.
The early machines use the Ampex 2-inch quadruplex system in which the drum has 4 heads and rotates at 14,400 RPM perpendicular to the tape, so the recorded tracks are transverse to the tape axis. With 2-inch tape, each head pass records only a short segment of the picture, so roughly 16 tracks are needed for a single analog NTSC video field, or 20 for a PAL field.
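To see why the spinning drum solved the bandwidth problem, a rough back-of-the-envelope calculation is sketched below. The drum diameter is an assumed round figure (on the order of the 2-inch tape width) used only to illustrate the order of magnitude of the head-to-tape "writing speed" relative to the linear tape speed; it is not a specification from this article.

```python
# Illustrative only: the head-to-tape writing speed comes from the spinning
# drum, not from the linear tape speed. The drum diameter is an assumption.
import math

rpm = 14_400                     # quadruplex head-drum speed (from the text)
drum_diameter_in = 2.0           # assumed round figure, for illustration only
linear_tape_speed_in_s = 15      # from the text

writing_speed_in_s = math.pi * drum_diameter_in * rpm / 60
print(f"head-to-tape speed ~ {writing_speed_in_s:,.0f} in/s "
      f"(~{writing_speed_in_s / linear_tape_speed_in_s:,.0f}x the linear tape speed)")
```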
The helical scan methods use a recording drum with a diagonal axis of rotation. The tape is wrapped longitudinally around the drum by idler wheels, so the tape heads, instead of moving across the tape at almost 90° to the direction of motion as in the quadruplex system, move across the tape at a shallow angle, recording a long diagonal track across the tape. This allows an entire frame to be recorded per track. This simplifies the electronics and timing systems. It also allows the recorder to be paused (freeze-framed) during playback to display a single still frame, by simply stopping the tape transport mechanism, allowing the tape heads to repeatedly pass over the same track.
This recording technique has many potential sources of timing errors. If the mechanism ran at an absolutely constant speed, never varying from moment to moment or from the time of recording to the time of playback, then the timing of the playback signal would be exactly the same as the input. However, imperfection being inevitable, the timing of the playback always differs to some extent from the original signal. Longitudinal error (error arising from effects in the long direction of the tape) can be caused by variations in the rotational rate of the capstan drive, stretching of the tape medium, and jamming of tape in the machine. Transverse error (error arising from effects in the cross-tape direction) can be caused by variations in the rotational speed of the scanning drum and differences in the angle between the tape and the scanning heads (usually addressed by video tracking controls). Longitudinal errors are similar to the ones that cause wow and flutter in audio recordings. Because these errors are readily detectable, and because it is standard video recording practice to record a parallel control track, servomechanisms can be adjusted accordingly to dramatically reduce this problem.
Many of the deficiencies of the open-reel systems were overcome with the invention of the videocassette recorder (VCR), where the videotape is enclosed in a user-friendly videocassette shell. This subsequently became the most familiar type of VTR known to consumers. In this system, the tape is preattached onto two reels enclosed within the cassette, and tape loading and unloading are automated. There is no need for the user to ever touch the tape, and the media can be protected from dust, dirt, and tape misalignments that can foul the recording mechanism. Typically, the only time the user ever touches the tape in a videocassette is when a failure results from a tape getting stuck in the mechanism.
Home VCRs first became available in the early 1970s, with Sony releasing its VO-1600 model in 1971 [ 17 ] and with Philips releasing the Model 1500 in England a year later. [ 18 ] The first system to be notably successful with consumers was Sony 's Betamax (or Beta) in 1975. It was soon followed by the competing VHS (Video Home System) format from JVC in 1977 [ 18 ] and later by other formats such as Video 2000 from Philips , V-Cord from Sanyo , and Great Time Machine from Quasar .
The Beta/VHS format war soon began, while the other competitors quickly disappeared. Betamax sales eventually began to dwindle, and after several years VHS emerged as the winner of the format war. In 1988, Sony began to market its own VHS machines, and despite claims that it was still backing Beta, it was clear that the format was no longer viable in most parts of the world. In parts of South America and in Japan , Betamax continued to be popular and was still in production up to the end of 2002. [ 19 ]
Later developments saw analog magnetic tapes largely replaced by digital video tape formats. Following this, much of the VTR market, in particular the videocassettes and VCRs popular at the consumer level, was also replaced by non-tape media, such as DVD and later Blu-ray optical discs .
The Buggles ' hit song " Video Killed the Radio Star ", the first video ever to air on MTV , contains the lyric "Put the blame on VTR". [ 20 ]
|
https://en.wikipedia.org/wiki/Video_tape_recorder
|
Viedma ripening or attrition-enhanced deracemization is a chiral symmetry breaking phenomenon observed in solid/liquid mixtures of enantiomorphous ( racemic conglomerate ) crystals that are subjected to comminution . It can be classified in the wider area of spontaneous symmetry breaking phenomena observed in chemistry and physics.
It was discovered in 2005 by geologist Cristobal Viedma, who used glass beads and a magnetic stirrer to enable particle breakage of a racemic mixture of enantiomorphous sodium chlorate crystals in contact with their saturated solution in water. [ 1 ] A sigmoidal ( autocatalytic ) increase in the solid-phase enantiomeric excess of the mixture was obtained, eventually leading to homochirality , i.e. the complete disappearance of one of the chiral species. Since the original discovery, Viedma ripening has been observed in a variety of intrinsically chiral organic compounds that exhibit conglomerate crystallization and are able to inter-convert in the liquid via racemization reactions. [ 2 ] It is also regarded as a potential new technique to separate enantiomers of chiral molecules in the pharmaceutical and fine chemical industries ( chiral resolution ).
The exact interplay of the mechanisms leading to deracemization in Viedma ripening is a subject of ongoing scientific debate. [ 3 ] [ 4 ] It is, however, currently believed that for intrinsically chiral molecules, deracemization occurs via a combination of several phenomena, including crystal growth and dissolution, the generation of small chiral fragments by attrition, enantiospecific agglomeration of those fragments, and racemization of the dissolved molecules in the liquid phase.
Two key assumptions often invoked to explain the mechanism are that: a) small fragments generated by breakage of each enantiomeric crystal population can maintain their chirality, even when they are smaller than the critical radius for nucleation (and are thus expected to dissolve), and b) small chiral fragments can undergo enantiospecific aggregation onto larger particles of the same chirality. Using these two assumptions, it can be shown mathematically [ 6 ] that any stochastic, even immeasurably small, asymmetry of one enantiomeric crystal population over the other can be amplified to homochirality in a random manner.
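The autocatalytic character of this amplification can be illustrated with a deliberately crude toy model. The sketch below is not one of the quantitative models from the literature: it simply gives each solid population a growth term that is second order in its own mass (standing in for enantiospecific agglomeration of fragments) and a first-order dissolution term feeding a racemizing dissolved pool, which is enough to amplify a small initial imbalance toward a single handedness. All rate constants and initial values are arbitrary illustrative choices.

```python
# Toy model of the feedback loop described above (illustrative assumptions only):
# growth ~ m * L^2 stands in for enantiospecific agglomeration of fragments,
# dissolution ~ d * L returns material to a rapidly racemizing dissolved pool m.
def simulate(L=0.501, D=0.499, m=0.5, growth=1.0, dissolve=0.4, dt=0.01, steps=20000):
    for _ in range(steps):
        dL = (growth * m * L * L - dissolve * L) * dt
        dD = (growth * m * D * D - dissolve * D) * dt
        L, D = L + dL, D + dD
        m -= dL + dD                 # dissolved material is achiral (fast racemization)
        m = max(m, 0.0)
    ee = (L - D) / (L + D)           # solid-phase enantiomeric excess
    return L, D, ee

L, D, ee = simulate()
print(f"L = {L:.3f}, D = {D:.3f}, ee = {ee:+.2%}")   # ee approaches +100%
```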
The molecules required for the generation of life, i.e. the amino acids that combine to form proteins and the sugars that form DNA molecules, are all chiral and are thus able to adopt two mirror-image forms (often described as left- and right-handed), which from a chemical perspective are equally likely to exist. However, all biologically relevant molecules known on Earth are of a single handedness, even though their mirror images are also capable of forming similar molecules. The reason for the prevalence of homochirality in living organisms is currently unknown and is often connected to the origin of life itself. Whether homochirality emerged before or after life is also unknown, but many researchers believe that it could have resulted from the amplification of extremely small chiral asymmetries.
Since Viedma ripening has been observed in biologically relevant molecules, such as chiral amino acids, [ 7 ] it has been put forward by some as a possible contributing mechanism for chiral amplification in a prebiotic world. [ 8 ] [ 9 ] [ 10 ]
|
https://en.wikipedia.org/wiki/Viedma_ripening
|
Vienna ÖNB 5203 [ 1 ] is a fifteenth-century astronomical multiple-text (and multi-graphical) miscellany manuscript conserved at the Austrian National Library (Österreichischen Nationalbibliothek). [ 2 ] One of the main features of this codex is that it was largely copied in the hand of Regiomontanus , [ 3 ] a famous German mathematician, astrologer and astronomer of the fifteenth century.
Apart from Regiomontanus’ autograph, Vienna ÖNB 5203 contains examples of two other scribal hands, one of which belongs to Georg von Peurbach , who was likewise an astronomer, mathematician and instrument maker, of Austrian origin.
The manuscript also contains a number of Peurbach's works, including his most famous one – Theoricae Novae Planetarum , a re-elaboration of Ptolemaic astronomy theories in a more comprehensive way. Other treatises in Vienna ÖNB 5203 touch upon a great variety of subjects, including astronomy, astrology, music, mathematics and physics. [ 4 ]
Vienna ÖNB 5203 [ 5 ] was composed between 1454 and 1462, probably at the University of Vienna, by Regiomontanus. The only parts of the manuscript not copied by Regiomontanus are folios 67-69 and 88–92, in the hand of Georg von Peurbach, and folios 79 to 86, belonging to a third, anonymous scribe. These folios contain an anonymous canon starting with “ Sinum totum ad sinum arcum ecliptice ab aliquo puncto... ”. Unlike the rest of the manuscript, this section is not annotated, but it is accompanied by a few diagrams (in addition, there are a few blank spaces, suggesting that more diagrams were initially planned to be added).
After Regiomontanus’ death in 1476, his entire library, which, apart from the books, included many astronomical instruments, passed into the ownership of his collaborator from Nuremberg, Bernhard Walther (1430–1504). [ 6 ] In the early and mid-sixteenth century the library of Regiomontanus, including the manuscript Vienna ÖNB 5203, attracted the attention of intellectuals. One such scholar was Johannes Schöner (1477–1547), a mathematician from the Nuremberg college, who foliated Vienna ÖNB 5203, added titles to many treatises throughout the manuscript as well as a list of contents, and entitled the codex “Regiomontanus’ calculation notebook”. The composition and binding of the manuscript have remained preserved ever since.
At some point the manuscript was also owned by Philipp Eduard Fugger (1546–1618). [ 4 ]
Regiomontanus (Johannes Müller von Königsberg) [ 7 ] was a German astrologer, astronomer and mathematician of the fifteenth century. He received his education at the University of Leipzig, followed by studies in Vienna, where the manuscript Vienna ÖNB 5203 was most likely composed. Regiomontanus was a pupil of Georg von Peurbach, whose works are found among the contents of the manuscript and who copied part of the text in his own hand. Regiomontanus contributed largely to publishing the printed edition of Peurbach's Theoricae Novae Planetarum after the death of his teacher.
Apart from copying most of Vienna ÖNB 5203, Regiomontanus left extensive marginal annotations in the manuscript. These notes are of particular interest, as they give an opportunity to trace both the production of the manuscript and Regiomontanus’ understanding of the works of his predecessors and contemporaries.
Georg von Peurbach (variants of the name: Purbach, Peuerbach, Purbachius) [ 8 ] was an Austrian astronomer, mathematician and instrument maker of the mid-fifteenth century. His most famous astronomical work, Theoricae Novae Planetarum , later made its way into print thanks to Regiomontanus. This work is the opening treatise of Vienna ÖNB 5203. Even though the manuscript contains Peurbach's autographs, this particular work is copied by Regiomontanus. The works copied by Peurbach himself include his treatise Fabrica et usus instrumenti pro veris coniunctionibus et oppositionibus Solis et Lune ( Cum animadvertissem quoddam instrumentum pro veris coniunctionibus facile reperiendis... ) on folios 67r-69r, as well as his Speculum planetarum ( Quoniam experimentum sermonum verorum est ut consonent… ) on folios 88r-92r.
Georg Peurbach's Theoricae Novae Planetarum was written in 1454 and then published by Regiomontanus in 1472. This treatise was an elaboration of Ptolemaic astronomy, [ 9 ] attempting to replace the older treatise entitled Theorica Planetarum [ Gerardi ], attributed to Gerard of Cremona, which was one of the major theoretical works in the university astronomical syllabus. Peurbach's Theoricae Novae Planetarum aligns more with the Parisian Alfonsine tables, while the one by Gerard of Cremona is closer to the Toledan tradition. The diagrams serve as a pedagogical tool for explaining the motion of the major planets, of the Sun and the Moon, of the fixed stars (the 8th sphere), and the theory of eclipses. The Theoricae occupies folios 2r-24r, and this copy is one of the three earliest manuscripts containing the work. [ 10 ] The diagrams that illustrate Peurbach's theory occupy the margins (unlike the printed edition, where they are situated directly in the text [ 11 ] ); the layout of the page suggests that Regiomontanus intentionally left wide margins in order to fill them later with technical drawings and notes.
As one of the first manuscripts containing the Theoricae , written down by the future publisher of this text, Vienna ÖNB 5203 served as a basis for the printed edition, at the level of the diagrams as well as the text. For instance, many of the diagrams found in BSB Clm 27 repeat those in Vienna ÖNB 5203.
Like many manuscripts of the Alfonsine corpus, Vienna ÖNB 5203 contains a work by one of the key figures of the Alfonsine period— John of Murs . [ 12 ] However, the choice of the work is rather original, as it touches upon a musical subject. John of Murs’ treatise Musica speculativa (also known under the title Musica speculativa secundum Boetium ) occupies folios 129r-133r. This work is in a certain way a summary of Boethius' musical treatise, written for study purposes. [ 13 ] The layout of folios 129r-133r differs from the rest of the manuscript, as Regiomontanus left noticeably smaller margins in comparison with those observed in the Theoricae Novae Planetarum . [ 10 ] However, the treatise is also accompanied by a number of marginal annotations, which is not surprising, as it was widely used in university teaching and was thus annotated in various manuscripts.
Apart from canons and diagrams, Vienna ÖNB 5203 also contains a number of astronomical tables. For instance, the canon at folios 54r-58v starting " Cum diu saepe dubitarem an tabella que Solis altitudines ad horam… " [ 14 ] and entitled Compositio tabule altitudinis Solis ad omnes horas is accompanied by tables that are integrated directly into the text [ 1 ] (in contrast to the common tradition of separating canons and tables within a manuscript, or sometimes even between two different manuscripts).
For instance, on folio 56v we find what appears to be a table but is in fact a computation of the oblique ascension laid out in tabular format. This type of content is not very common within the manuscripts of the Alfonsine corpus; in other words, this table was perhaps not intended to be used in computation, but rather to show how to construct such a table in the first place.
Folios 48r-50r, on the other hand, contain a vast table entitled Tabula radicum et numerorum cubicorum that is not accompanied by any text. The table is incomplete and is only filled in on folios 48r-49r. Folios 49v-50r contain the ruling and the column headings; however, the numbers were never written in.
Most of the works in Vienna ÖNB 5203 are accompanied by marginal annotations. [ 4 ] In addition to being autographs of Regiomontanus and Peurbach, these marginal notes are of particular interest to historians of science. [ 1 ] They give an opportunity to trace both the production of the manuscript and Regiomontanus’ understanding of the works of his predecessors and contemporaries. The marginal content is also diverse in type and can be classified by form: diagrams, textual notes and calculations, and tabular marginalia .
|
https://en.wikipedia.org/wiki/Vienna,_Österreichischen_Nationalbibliothek,_MS_5203
|
The ViennaRNA Package is a software suite, a set of standalone programs and libraries used for predicting and analysing RNA secondary structures . [ 1 ] The source code for the package is released as free and open-source software and compiled binaries are available for the operating systems Linux , macOS , and Windows . The original paper has been cited over 2,000 times.
The three-dimensional structures of biological macromolecules like proteins and nucleic acids play a critical role in determining their function. [ 2 ] Decoding function from sequence is an experimentally and computationally challenging question that has been widely addressed. [ 3 ] [ 4 ] RNA forms complex secondary and tertiary structures, in contrast to DNA, which forms duplexes with full complementarity between two strands. This is partly because the extra hydroxyl group in RNA increases the propensity for hydrogen bonding in the nucleic acid backbone. The base pairing and base stacking interactions of RNA play a critical role in the formation of the ribosome , the spliceosome , and tRNA .
Secondary structure prediction is commonly done using approaches like dynamic programming, energy minimisation (to find the most stable structure), and the generation of suboptimal structures. Many structure prediction tools have implemented these approaches.
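As an illustration of the dynamic-programming idea only, the sketch below implements the classic Nussinov maximum base-pairing recursion in Python. This is a deliberately simplified stand-in for teaching purposes, not the Turner nearest-neighbour energy model that the ViennaRNA Package itself uses.

```python
# Nussinov-style dynamic programming: maximise the number of nested base pairs.
# Simplified illustration; real tools minimise a physical free-energy model.
def nussinov_max_pairs(seq, min_loop=3):
    """Return the maximum number of nested Watson-Crick/GU pairs in seq."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):              # distance between i and j
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # case 1: j unpaired
            for k in range(i, j - min_loop):         # case 2: j pairs with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))   # 3 pairs for this toy hairpin
```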
The first version of the ViennaRNA Package was published by Hofacker et al. in 1994. [ 1 ] The package distributed tools to compute either minimum free energy structures or partition functions of RNA molecules, both using the idea of dynamic programming . Non-thermodynamic criteria, such as the formation of maximum matchings and various versions of kinetic folding, were also implemented, along with an inverse folding heuristic to determine structurally neutral sequences. Additionally, the package contained a statistics suite with routines for cluster analysis , statistical geometry, and split decomposition.
The package was made available as a library and a set of standalone routines.
A number of major systemic changes were introduced in this version, including the use of a new parametrized energy model ( Turner 2004 ), [ 5 ] restructuring of the RNAlib library to support concurrent computations in a thread-safe manner, improvements to the application programming interface ( API ), and the inclusion of several new auxiliary tools, for example tools to assess RNA–RNA interactions and restricted ensembles of structures. Other new features included additional output information, such as centroid structures and maximum expected accuracy structures derived from base pairing probabilities, or z-scores for locally stable secondary structures, and support for input in FASTA format. The updates remain compatible with earlier versions and do not affect the computational efficiency of the core algorithms. [ 6 ]
The tools provided by the ViennaRNA Package are also available for public use through a web interface. [ 7 ] [ 8 ]
In addition to prediction and analysis tools, the ViennaRNA Package contains several scripts and utilities for plotting and input-output processing. A summary of the available programs is collected in the table below (an exhaustive list with examples can be found in the official documentation). [ 9 ]
|
https://en.wikipedia.org/wiki/ViennaRNA_Package
|
The Vienna Ab initio Simulation Package , better known as VASP , is a package written primarily in Fortran for performing ab initio quantum mechanical calculations using either Vanderbilt pseudopotentials , or the projector augmented wave method , and a plane wave basis set . [ 2 ] The basic methodology is density functional theory (DFT), but the code also allows use of post-DFT corrections such as hybrid functionals mixing DFT and Hartree–Fock exchange (e.g. HSE, [ 3 ] PBE0 [ 4 ] or B3LYP [ 5 ] ), many-body perturbation theory (the GW method [ 6 ] ) and dynamical electronic correlations within the random phase approximation (RPA) [ 7 ] and MP2 . [ 8 ] [ 9 ]
Originally, VASP was based on code written by Mike Payne (then at MIT ), which was also the basis of CASTEP . [ 10 ] It was then brought to the University of Vienna , Austria, in July 1989 by Jürgen Hafner . The main program was written by Jürgen Furthmüller , who joined the group at the Institut für Materialphysik in January 1993, and Georg Kresse. An early version of VASP was called VAMP. [ 11 ] VASP is currently being developed by Georg Kresse ; recent additions include the extension of methods frequently used in molecular quantum chemistry to periodic systems.
VASP is currently used by more than 1400 research groups in academia and industry worldwide on the basis of software licence agreements with the University of Vienna. Because VASP can be used for a wide range of applications such as phonon calculations and structure calculations, it is widely employed in the fields of condensed matter physics, materials science, and quantum chemistry.
Recent version history: VASP.6.4.1 on 7 April 2023, VASP.6.4.2 on 20 July 2023, VASP.6.4.3 on 19 March 2024 and VASP.6.5.0 on 17 December 2024.
|
https://en.wikipedia.org/wiki/Vienna_Ab_initio_Simulation_Package
|
Vienna Standard Mean Ocean Water ( VSMOW ) is an isotopic standard for water, that is, a particular sample of water whose proportions of different isotopes of hydrogen and oxygen are accurately known. VSMOW is distilled from ocean water and does not contain salt or other impurities. Published and distributed by the Vienna -based International Atomic Energy Agency in 1968, the standard and its essentially identical successor, VSMOW2, continue to be used as a reference material .
Water samples made up of different isotopes of hydrogen and oxygen have slightly different physical properties. As an extreme example, heavy water , which contains two deuterium ( 2 H) atoms instead of the usual, lighter hydrogen-1 ( 1 H), has a melting point of 3.82 °C (38.88 °F) and boiling point of 101.4 °C (214.5 °F). [ 1 ] Different rates of evaporation cause water samples from different places in the water cycle to contain slightly different ratios of isotopes. Ocean water (richer in heavy isotopes) and rain water (poorer in heavy isotopes) roughly represent the two extremes found on Earth. With VSMOW, the IAEA simultaneously published an analogous standard for rain water, Standard Light Antarctic Precipitation (SLAP), and eventually its successor SLAP2. SLAP contains about 5% less oxygen-18 and 42.8% less deuterium than VSMOW.
A scale based on VSMOW and SLAP is used to report oxygen-18 and deuterium concentrations. From 2005 until its redefinition in 2019 , the kelvin was specified to be 1/273.16 of the temperature of specifically VSMOW at its triple point .
Abundances of a particular isotope in a substance are usually given relative to some reference material, as a delta in parts per thousand ( ‰ ) from the reference. For example, the ratio of deuterium ( 2 H) to hydrogen-1 in a substance x may be given, in parts per thousand, as δ 2 H x = ( ( 2 H / 1 H ) x / ( 2 H / 1 H ) ref − 1 ) × 1000 {\displaystyle \delta ^{2}\mathrm {H} _{x}=\left({\frac {(^{2}\mathrm {H} /^{1}\mathrm {H} )_{x}}{(^{2}\mathrm {H} /^{1}\mathrm {H} )_{\mathrm {ref} }}}-1\right)\times 1000,} where ( 2 H / 1 H ) x {\displaystyle (^{2}\mathrm {H} /^{1}\mathrm {H} )_{x}} denotes the absolute concentration in x and ( 2 H / 1 H ) ref {\displaystyle (^{2}\mathrm {H} /^{1}\mathrm {H} )_{\mathrm {ref} }} that in the reference material. [ 2 ]
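As a worked illustration of this notation, using the SLAP value quoted later in this article: a sample with δ 2 H = −428‰ relative to VSMOW has ( 2 H / 1 H ) sample / ( 2 H / 1 H ) VSMOW = 1 + ( − 428 / 1000 ) = 0.572 {\displaystyle (^{2}\mathrm {H} /^{1}\mathrm {H} )_{\text{sample}}/(^{2}\mathrm {H} /^{1}\mathrm {H} )_{\mathrm {VSMOW} }=1+(-428/1000)=0.572} , i.e. 57.2% of the deuterium concentration of VSMOW.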
In 1961, pursuing a standard for measuring and reporting deuterium and oxygen-18 concentrations, Harmon Craig of the Scripps Institution of Oceanography in San Diego , California, proposed an abstract water standard. He based the proportions on his measurements of samples taken by Epstein & Mayeda (1953) of ocean waters around the world. [ 3 ] Approximating an average of their measurements, Craig defined his "standard mean ocean water" (SMOW) relative to a water sample held in the United States' National Bureau of Standards called NBS-1 (sampled from the Potomac River [ 4 ] ), specifying fixed deuterium and oxygen-18 ratios for SMOW relative to NBS-1.
Later, researchers at the California Institute of Technology defined another abstract reference, also called "SMOW", for oxygen-18 concentrations, such that a sample of Potsdam Sandstone in their possession satisfied δ 18 O sandstone/SMOW = 15.5‰ . [ 5 ]
To resolve the confusion, a November 1966 meeting of the Vienna-based International Atomic Energy Agency (IAEA) recommended the preparation of two water isotopic standards: Vienna SMOW (VSMOW; initially just "SMOW" but later disambiguated [ 5 ] ) and Standard Light Antarctic Precipitation (SLAP). [ 6 ] Craig prepared VSMOW by mixing distilled Pacific Ocean water with small amounts of other waters. VSMOW was intended to match the SMOW standard as closely as possible. Craig's measurements found an identical 18 O concentration and a 0.2‰ lower 2 H concentration. [ 7 ] The SLAP standard was created from a melted firn sample from Plateau Station in Antarctica. [ 7 ] A standard with oxygen-18 and deuterium concentrations between those of VSMOW and SLAP, called Greenland Ice Sheet Precipitation (GISP), was also prepared. [ 7 ] The IAEA began distributing samples in 1968, and Gonfiantini (1978) compiled analyses of VSMOW and SLAP from 45 laboratories around the world. [ 8 ] The VSMOW sample was stored in a stainless-steel container under nitrogen and was transferred to glass ampoules in 1977. [ 7 ]
The deuterium and oxygen-18 concentrations in VSMOW are close to the upper end of naturally occurring materials, and the concentrations in SLAP are close to the lower end. [ 2 ] Due to confusion over multiple water standards, the Commission on Isotopic Abundances and Atomic Weights recommended in 1994 that all future isotopic measurements of oxygen-18 ( 18 O) and deuterium ( 2 H) be reported relative to VSMOW, on a scale such that the δ 18 O of SLAP is −55.5‰ and the δ 2 H of SLAP is −428‰, relative to VSMOW. [ 9 ] [ 10 ] Therefore, SLAP is defined to contain 94.45% of the oxygen-18 concentration and 57.2% of the deuterium concentration of VSMOW. [ 9 ] Using a scale with two defined samples improves comparison of results between laboratories.
In December 1996, because of a dwindling supply of VSMOW, the IAEA decided to create a replacement standard, VSMOW2. Published in 1999, it contains a nearly identical isotopic mixture. About 300 liters was prepared from a mixture of distilled waters, from Lake Bracciano in Italy, the Sea of Galilee in Israel, and a well in Egypt, in proportions chosen to reach VSMOW isotopic ratios. The IAEA also published a successor to SLAP, called SLAP2, derived from melted water from four Antarctic drilling sites. [ 11 ] Deviations of 17 O, and 18 O in the new standards from the old standards are zero within the error of measurement. [ 12 ] There is a small but measurable deviation of 2 H concentration in SLAP2 from SLAP— δ 2 H SLAP2/VSMOW is defined to be −427.5‰ instead of −428‰—but not in VSMOW2 from VSMOW. [ 13 ] The IAEA recommends that measurements still be reported on the VSMOW–SLAP scale. [ 14 ]
The older two standards are now kept at the IAEA and no longer sold. [ 15 ]
All measurements are reported with their standard uncertainty . Measurements of particular combinations of oxygen and hydrogen isotopes are unnecessary because water molecules constantly exchange atoms with each other.
Except for tritium, which was determined by the helium gas emitted by radioactive decay, these measurements were taken using mass spectrometry .
Based on the results of Gonfiantini (1978) , the IAEA defined the delta scale with SLAP at −55.5 ‰ for 18 O and −428‰ for 2 H. That is, SLAP was measured to contain approximately 5.55% less oxygen-18 and 42.8% less deuterium than does VSMOW, and these figures were used to anchor the scale at two points. [ 8 ] Experimental figures are given below.
The concentrations of 17 O and 18 O are indistinguishable between VSMOW and VSMOW2, and between SLAP and SLAP2. The specification sheet gives the standard errors in these measurements. [ 20 ] The concentration of 2 H is unchanged in VSMOW2 as well, but is slightly increased in SLAP2.
On 6 July 2007, the tritium concentration was 3.5 ± 1.0 TU in VSMOW2, and 27.6 ± 1.6 TU in SLAP2. [ 22 ]
The VSMOW–SLAP scale is recommended by the USGS, IUPAC, and IAEA for measurement of deuterium and 18 O concentrations in any substance. [ 24 ] [ 25 ] [ 9 ] For 18 O, a scale based on Vienna Pee Dee Belemnite can also be used. [ 9 ] The physical samples, which are distributed by the IAEA and U.S. National Institute of Standards and Technology , are used to calibrate isotope-measuring equipment. [ 26 ]
Variations in isotopic content are useful in hydrology, meteorology, and oceanography. [ 27 ] Different parts of the ocean do have slightly different isotopic concentrations: δ 18 O values range from –11.35‰ in water off the coast of Greenland to +1.32‰ in the north Atlantic, and δ 2 H concentrations in deep ocean water range from roughly –1.7‰ near Antarctica to +2.2‰ in the Arctic. Variations are much larger in surface water than in deep water. [ 28 ]
In 1954, the International Committee for Weights and Measures (CIPM) established the definition of the Kelvin as 1/273.16 of the absolute temperature of the triple point of water. Waters with different isotopic compositions had slightly different triple points. Thus, the International Committee for Weights and Measures specified in 2005 [ 29 ] that the definition of the kelvin temperature scale would refer to water with a composition of the nominal specification of VSMOW. [ 30 ] The decision was welcomed in 2007 by Resolution 10 of the 23rd CGPM. [ 31 ] The triple point is measured in triple-point cells, where the water is held at its triple point and allowed to reach equilibrium with its surroundings. Using ordinary waters, the range of inter-laboratory measurements of the triple point can be about 250 μK . [ 32 ] With VSMOW, the inter-laboratory range of measurements of the triple point is about 50 μK . [ 33 ]
After the 2019 revision of the SI , the kelvin is defined in terms of the Boltzmann constant , which makes its definition completely independent of the properties of water. The defined value for the Boltzmann constant was selected so that the measured value of the VSMOW triple point is identical to the prior defined value, within measurable accuracy. [ 34 ] Triple-point cells remain a practical method of calibrating thermometers. [ 33 ]
|
https://en.wikipedia.org/wiki/Vienna_Standard_Mean_Ocean_Water
|
In mathematics , Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots . They are named after François Viète (1540-1603), more commonly referred to by the Latinised form of his name, "Franciscus Vieta."
Any general polynomial of degree n P ( x ) = a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 {\displaystyle P(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}} (with the coefficients being real or complex numbers and a n ≠ 0 ) has n (not necessarily distinct) complex roots r 1 , r 2 , ..., r n by the fundamental theorem of algebra . Vieta's formulas relate the polynomial coefficients to signed sums of products of the roots r 1 , r 2 , ..., r n as follows: { r 1 + r 2 + ⋯ + r n = − a n − 1 / a n , ( r 1 r 2 + r 1 r 3 + ⋯ + r 1 r n ) + ( r 2 r 3 + ⋯ + r 2 r n ) + ⋯ + r n − 1 r n = a n − 2 / a n , ⋮ , r 1 r 2 ⋯ r n = ( − 1 ) n a 0 / a n {\displaystyle {\begin{cases}r_{1}+r_{2}+\cdots +r_{n}=-{\dfrac {a_{n-1}}{a_{n}}}\\(r_{1}r_{2}+r_{1}r_{3}+\cdots +r_{1}r_{n})+(r_{2}r_{3}+\cdots +r_{2}r_{n})+\cdots +r_{n-1}r_{n}={\dfrac {a_{n-2}}{a_{n}}}\\\quad \vdots \\r_{1}r_{2}\cdots r_{n}=(-1)^{n}{\dfrac {a_{0}}{a_{n}}}\end{cases}}} (*)
Vieta's formulas can equivalently be written as ∑ 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n ( ∏ j = 1 k r i j ) = ( − 1 ) k a n − k a n {\displaystyle \sum _{1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}\left(\prod _{j=1}^{k}r_{i_{j}}\right)=(-1)^{k}{\frac {a_{n-k}}{a_{n}}}}
for k = 1, 2, ..., n (the indices i k are sorted in increasing order to ensure each product of k roots is used exactly once).
The left-hand sides of Vieta's formulas are the elementary symmetric polynomials of the roots.
Vieta's system (*) can be solved by Newton's method through an explicit simple iterative formula, the Durand-Kerner method .
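As an illustration of this simultaneous-iteration idea, the sketch below implements a basic Durand–Kerner loop in Python. The starting points, iteration count, and tolerance are arbitrary illustrative choices; production root finders add safeguards not shown here.

```python
# Minimal Durand-Kerner (Weierstrass) iteration: refine all roots at once.
def durand_kerner(coeffs, iterations=100, tol=1e-12):
    """coeffs = [a_n, ..., a_1, a_0] with a_n != 0; returns approximate roots."""
    n = len(coeffs) - 1
    monic = [c / coeffs[0] for c in coeffs]        # work with a monic polynomial

    def p(x):                                      # Horner evaluation
        result = 0
        for c in monic:
            result = result * x + c
        return result

    # distinct, mostly non-real starting points (a common conventional choice)
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        new_roots = []
        for i, x in enumerate(roots):
            denom = 1
            for j, y in enumerate(roots):
                if j != i:
                    denom *= (x - y)
            new_roots.append(x - p(x) / denom)
        done = max(abs(a - b) for a, b in zip(new_roots, roots)) < tol
        roots = new_roots
        if done:
            break
    return roots

# Example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
print(sorted(r.real for r in durand_kerner([1, -6, 11, -6])))   # ~[1.0, 2.0, 3.0]
```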
Vieta's formulas are frequently used with polynomials with coefficients in any integral domain R . Then, the quotients a i / a n {\displaystyle a_{i}/a_{n}} belong to the field of fractions of R (and possibly are in R itself if a n {\displaystyle a_{n}} happens to be invertible in R ) and the roots r i {\displaystyle r_{i}} are taken in an algebraically closed extension. Typically, R is the ring of the integers , the field of fractions is the field of the rational numbers and the algebraically closed field is the field of the complex numbers .
Vieta's formulas are then useful because they provide relations between the roots without having to compute them.
For polynomials over a commutative ring that is not an integral domain, Vieta's formulas are only valid when a n {\displaystyle a_{n}} is not a zero-divisor and P ( x ) {\displaystyle P(x)} factors as a n ( x − r 1 ) ( x − r 2 ) … ( x − r n ) {\displaystyle a_{n}(x-r_{1})(x-r_{2})\dots (x-r_{n})} . For example, in the ring of the integers modulo 8, the quadratic polynomial P ( x ) = x 2 − 1 {\displaystyle P(x)=x^{2}-1} has four roots: 1, 3, 5, and 7. Vieta's formulas are not true if, say, r 1 = 1 {\displaystyle r_{1}=1} and r 2 = 3 {\displaystyle r_{2}=3} , because P ( x ) ≠ ( x − 1 ) ( x − 3 ) {\displaystyle P(x)\neq (x-1)(x-3)} . However, P ( x ) {\displaystyle P(x)} does factor as ( x − 1 ) ( x − 7 ) {\displaystyle (x-1)(x-7)} and also as ( x − 3 ) ( x − 5 ) {\displaystyle (x-3)(x-5)} , and Vieta's formulas hold if we set either r 1 = 1 {\displaystyle r_{1}=1} and r 2 = 7 {\displaystyle r_{2}=7} or r 1 = 3 {\displaystyle r_{1}=3} and r 2 = 5 {\displaystyle r_{2}=5} .
Vieta's formulas applied to quadratic and cubic polynomials:
The roots r 1 , r 2 {\displaystyle r_{1},r_{2}} of the quadratic polynomial P ( x ) = a x 2 + b x + c {\displaystyle P(x)=ax^{2}+bx+c} satisfy r 1 + r 2 = − b a , r 1 r 2 = c a . {\displaystyle r_{1}+r_{2}=-{\frac {b}{a}},\quad r_{1}r_{2}={\frac {c}{a}}.}
The first of these equations can be used to find the minimum (or maximum) of P ; see Quadratic equation § Vieta's formulas .
The roots r 1 , r 2 , r 3 {\displaystyle r_{1},r_{2},r_{3}} of the cubic polynomial P ( x ) = a x 3 + b x 2 + c x + d {\displaystyle P(x)=ax^{3}+bx^{2}+cx+d} satisfy r 1 + r 2 + r 3 = − b a , r 1 r 2 + r 1 r 3 + r 2 r 3 = c a , r 1 r 2 r 3 = − d a . {\displaystyle r_{1}+r_{2}+r_{3}=-{\frac {b}{a}},\quad r_{1}r_{2}+r_{1}r_{3}+r_{2}r_{3}={\frac {c}{a}},\quad r_{1}r_{2}r_{3}=-{\frac {d}{a}}.}
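These relations are easy to check numerically. Below is a minimal sketch using NumPy's root finder, with arbitrary example coefficients; it is a sanity check of the cubic case above, not part of the original article.

```python
# Numerical check of Vieta's formulas for a cubic with arbitrary coefficients.
import numpy as np

a, b, c, d = 2.0, -9.0, 7.0, 6.0          # P(x) = 2x^3 - 9x^2 + 7x + 6
r1, r2, r3 = np.roots([a, b, c, d])        # roots are 2, 3, and -0.5

print(np.isclose(r1 + r2 + r3, -b / a))               # sum of roots         = -b/a
print(np.isclose(r1*r2 + r1*r3 + r2*r3, c / a))       # sum of pair products =  c/a
print(np.isclose(r1 * r2 * r3, -d / a))               # product of roots     = -d/a
```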
Vieta's formulas can be proved by considering the equality a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = a n ( x − r 1 ) ( x − r 2 ) ⋯ ( x − r n ) {\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}=a_{n}(x-r_{1})(x-r_{2})\cdots (x-r_{n})} (which is true since r 1 , r 2 , … , r n {\displaystyle r_{1},r_{2},\dots ,r_{n}} are all the roots of this polynomial), expanding the products in the right-hand side, and equating the coefficients of each power of x {\displaystyle x} between the two members of the equation.
Formally, if one expands ( x − r 1 ) ( x − r 2 ) ⋯ ( x − r n ) {\displaystyle (x-r_{1})(x-r_{2})\cdots (x-r_{n})} and regroups the terms by their degree in x {\displaystyle x} , one gets ( x − r 1 ) ( x − r 2 ) ⋯ ( x − r n ) = ∑ k = 0 n ( − 1 ) k ( ∑ 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n r i 1 r i 2 ⋯ r i k ) x n − k {\displaystyle (x-r_{1})(x-r_{2})\cdots (x-r_{n})=\sum _{k=0}^{n}(-1)^{k}\left(\sum _{1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}r_{i_{1}}r_{i_{2}}\cdots r_{i_{k}}\right)x^{n-k},}
where the inner sum is exactly the k {\displaystyle k} th elementary symmetric function e k ( r 1 , … , r n ) {\displaystyle e_{k}(r_{1},\dots ,r_{n})} of the roots.
As an example, consider the quadratic f ( x ) = a 2 x 2 + a 1 x + a 0 = a 2 ( x − r 1 ) ( x − r 2 ) = a 2 ( x 2 − x ( r 1 + r 2 ) + r 1 r 2 ) . {\displaystyle f(x)=a_{2}x^{2}+a_{1}x+a_{0}=a_{2}(x-r_{1})(x-r_{2})=a_{2}(x^{2}-x(r_{1}+r_{2})+r_{1}r_{2}).}
Comparing identical powers of x {\displaystyle x} , we find a 2 = a 2 {\displaystyle a_{2}=a_{2}} , a 1 = − a 2 ( r 1 + r 2 ) {\displaystyle a_{1}=-a_{2}(r_{1}+r_{2})} and a 0 = a 2 ( r 1 r 2 ) {\displaystyle a_{0}=a_{2}(r_{1}r_{2})} , with which we can for example identify r 1 + r 2 = − a 1 / a 2 {\displaystyle r_{1}+r_{2}=-a_{1}/a_{2}} and r 1 r 2 = a 0 / a 2 {\displaystyle r_{1}r_{2}=a_{0}/a_{2}} , which are Vieta's formulas for n = 2 {\displaystyle n=2} .
Vieta's formulas can also be proven by induction as shown below.
Inductive hypothesis:
Let P ( x ) {\displaystyle {P(x)}} be polynomial of degree n {\displaystyle n} , with complex roots r 1 , r 2 , … , r n {\displaystyle {r_{1}},{r_{2}},{\dots },{r_{n}}} and complex coefficients a 0 , a 1 , … , a n {\displaystyle a_{0},a_{1},\dots ,a_{n}} where a n ≠ 0 {\displaystyle {a_{n}}\neq 0} . Then the inductive hypothesis is that P ( x ) = a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = a n x n − a n ( r 1 + r 2 + ⋯ + r n ) x n − 1 + ⋯ + ( − 1 ) n ( a n ) ( r 1 r 2 ⋯ r n ) {\displaystyle {P(x)}={a_{n}}{x^{n}}+{{a_{n-1}}{x^{n-1}}}+{\cdots }+{{a_{1}}{x}}+{{a}_{0}}={{a_{n}}{x^{n}}}-{a_{n}}{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}){x^{n-1}}}+{\cdots }+{{(-1)^{n}}{(a_{n})}{({r_{1}}{r_{2}}{\cdots }{r_{n}})}}}
Base case, n = 2 {\displaystyle n=2} (quadratic):
Let a 2 , a 1 {\displaystyle {a_{2}},{a_{1}}} be coefficients of the quadratic and a 0 {\displaystyle a_{0}} be the constant term. Similarly, let r 1 , r 2 {\displaystyle {r_{1}},{r_{2}}} be the roots of the quadratic: a 2 x 2 + a 1 x + a 0 = a 2 ( x − r 1 ) ( x − r 2 ) {\displaystyle {a_{2}x^{2}}+{a_{1}x}+a_{0}={a_{2}}{(x-r_{1})(x-r_{2})}} Expand the right side using distributive property : a 2 x 2 + a 1 x + a 0 = a 2 ( x 2 − r 1 x − r 2 x + r 1 r 2 ) {\displaystyle {a_{2}x^{2}}+{a_{1}x}+a_{0}={a_{2}}{({x^{2}}-{r_{1}x}-{r_{2}x}+{r_{1}}{r_{2}})}} Collect like terms : a 2 x 2 + a 1 x + a 0 = a 2 ( x 2 − ( r 1 + r 2 ) x + r 1 r 2 ) {\displaystyle {a_{2}x^{2}}+{a_{1}x}+a_{0}={a_{2}}{({x^{2}}-{({r_{1}}+{r_{2}}){x}}+{r_{1}}{r_{2}})}} Apply distributive property again: a 2 x 2 + a 1 x + a 0 = a 2 x 2 − a 2 ( r 1 + r 2 ) x + a 2 ( r 1 r 2 ) {\displaystyle {a_{2}x^{2}}+{a_{1}x}+a_{0}={{a_{2}}{x^{2}}-{{a_{2}}({r_{1}}+{r_{2}}){x}}+{a_{2}}{({r_{1}}{r_{2}})}}} The inductive hypothesis has now been proven true for n = 2 {\displaystyle n=2} .
Induction step:
Assuming the inductive hypothesis holds for a given degree n ⩾ 2 {\displaystyle n\geqslant 2} , it must be shown that it also holds for degree n + 1 {\displaystyle n+1} . P ( x ) = a n + 1 x n + 1 + a n x n + ⋯ + a 1 x + a 0 {\displaystyle {P(x)}={a_{n+1}}{x^{n+1}}+{{a_{n}}{x^{n}}}+{\cdots }+{{a_{1}}{x}}+{{a}_{0}}} By the factor theorem , ( x − r n + 1 ) {\displaystyle {(x-r_{n+1})}} can be factored out of P ( x ) {\displaystyle P(x)} leaving a remainder of 0. Note that the roots of the polynomial in the square brackets are r 1 , r 2 , ⋯ , r n {\displaystyle r_{1},r_{2},\cdots ,r_{n}} : P ( x ) = ( x − r n + 1 ) [ a n + 1 x n + 1 + a n x n + ⋯ + a 1 x + a 0 x − r n + 1 ] {\displaystyle {P(x)}={(x-r_{n+1})}{[{\frac {{a_{n+1}}{x^{n+1}}+{{a_{n}}{x^{n}}}+{\cdots }+{{a_{1}}{x}}+{{a}_{0}}}{x-r_{n+1}}}]}} Factor out a n + 1 {\displaystyle a_{n+1}} , the leading coefficient of P ( x ) {\displaystyle P(x)} , from the polynomial in the square brackets: P ( x ) = ( a n + 1 ) ( x − r n + 1 ) [ x n + 1 + a n x n ( a n + 1 ) + ⋯ + a 1 ( a n + 1 ) x + a 0 ( a n + 1 ) x − r n + 1 ] {\displaystyle {P(x)}={(a_{n+{1}})}{(x-r_{n+1})}{[{\frac {{x^{n+1}}+{\frac {{a_{n}}{x^{n}}}{(a_{n+{1}})}}+{\cdots }+{{\frac {a_{1}}{(a_{n+{1}})}}{x}}+{\frac {a_{0}}{(a_{n+{1}})}}}{x-r_{n+1}}}]}} For simplicity's sake, let the coefficients and constant term of the bracketed polynomial be denoted by ζ {\displaystyle \zeta } : P ( x ) = ( a n + 1 ) ( x − r n + 1 ) [ x n + ζ n − 1 x n − 1 + ⋯ + ζ 0 ] {\displaystyle P(x)={(a_{n+1})}{(x-r_{n+1})}{[{x^{n}}+{\zeta _{n-1}x^{n-1}}+{\cdots }+{\zeta _{0}}]}} Using the inductive hypothesis, the polynomial in the square brackets can be rewritten as: P ( x ) = ( a n + 1 ) ( x − r n + 1 ) [ x n − ( r 1 + r 2 + ⋯ + r n ) x n − 1 + ⋯ + ( − 1 ) n ( r 1 r 2 ⋯ r n ) ] {\displaystyle P(x)={(a_{n+1})}{(x-r_{n+1})}{[{x^{n}}-{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}){x^{n-1}}}+{\cdots }+{{(-1)^{n}}{({r_{1}}{r_{2}}{\cdots }{r_{n}})}}]}} Using the distributive property: P ( x ) = ( a n + 1 ) ( x [ x n − ( r 1 + r 2 + ⋯ + r n ) x n − 1 + ⋯ + ( − 1 ) n ( r 1 r 2 ⋯ r n ) ] − r n + 1 [ x n − ( r 1 + r 2 + ⋯ + r n ) x n − 1 + ⋯ + ( − 1 ) n ( r 1 r 2 ⋯ r n ) ] ) {\displaystyle P(x)={(a_{n+1})}{({x}{[{x^{n}}-{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}){x^{n-1}}}+{\cdots }+{{(-1)^{n}}{({r_{1}}{r_{2}}{\cdots }{r_{n}})}}]}{-r_{n+1}}{[{x^{n}}-{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}){x^{n-1}}}+{\cdots }+{{(-1)^{n}}{({r_{1}}{r_{2}}{\cdots }{r_{n}})}}]})}} After expanding and collecting like terms: P ( x ) = a n + 1 x n + 1 − a n + 1 ( r 1 + r 2 + ⋯ + r n + r n + 1 ) x n + ⋯ + ( − 1 ) n + 1 ( a n + 1 ) ( r 1 r 2 ⋯ r n r n + 1 ) {\displaystyle {\begin{aligned}{P(x)}={{a_{n+1}}{x^{n+1}}}-{a_{n+1}}{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}+{r_{n+1}}){x^{n}}}+{\cdots }+{{(-1)^{n+1}}{(a_{n+1})}{({r_{1}}{r_{2}}{\cdots }{r_{n}}{r_{n+1}})}}\\\end{aligned}}} The inductive hypothesis therefore holds for n + 1 {\displaystyle n+1} , and thus, by induction, for every degree n ⩾ 2 {\displaystyle n\geqslant 2} .
Conclusion: a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = a n x n − a n ( r 1 + r 2 + ⋯ + r n ) x n − 1 + ⋯ + ( − 1 ) n ( a n ) ( r 1 r 2 ⋯ r n ) {\displaystyle {a_{n}}{x^{n}}+{{a_{n-1}}{x^{n-1}}}+{\cdots }+{{a_{1}}{x}}+{{a}_{0}}={{a_{n}}{x^{n}}}-{a_{n}}{({r_{1}}+{r_{2}}+{\cdots }+{r_{n}}){x^{n-1}}}+{\cdots }+{{(-1)^{n}}{(a_{n})}{({r_{1}}{r_{2}}{\cdots }{r_{n}})}}} Dividing both sides by a n {\displaystyle a_{n}} and equating the coefficients of each power of x {\displaystyle x} yields Vieta's formulas.
A method similar to Vieta's formulas can be found in the work of the 12th-century Islamic mathematician Sharaf al-Din al-Tusi . It is plausible that algebraic advancements made by other Islamic mathematicians such as Omar Khayyam , al-Tusi , and al-Kashi influenced 16th-century algebraists, with Vieta being the most prominent among them. [ 1 ] [ 2 ]
The formulas were derived by the 16th-century French mathematician François Viète , for the case of positive roots.
In the opinion of the 18th-century British mathematician Charles Hutton , as quoted by Funkhouser, [ 3 ] the general principle (not restricted to positive real roots) was first understood by the 17th-century French mathematician Albert Girard :
...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation.
|
https://en.wikipedia.org/wiki/Vieta's_formulas
|
In number theory , Vieta jumping , also known as root flipping , is a proof technique . It is most often used for problems in which a relation between two integers is given, along with a statement to prove about its solutions. In particular, it can be used to produce new solutions of a quadratic Diophantine equation from known ones. There exist multiple variations of Vieta jumping, all of which involve the common theme of infinite descent by finding new solutions to an equation using Vieta's formulas .
Vieta jumping is a classical method in the theory of quadratic Diophantine equations and binary quadratic forms . For example, it was used in the analysis of the Markov equation back in 1879 and in the 1953 paper of Mills. [ 1 ]
In 1988, the method came to the attention of the mathematical olympiad community when the first olympiad problem to require it in a solution was proposed for the International Mathematical Olympiad and was assumed to be the most difficult problem on the contest; the problem (IMO 1988, Problem 6) is stated below. [ 2 ] [ 3 ]
Arthur Engel wrote the following about the problem's difficulty:
Nobody of the six members of the Australian problem committee could solve it. Two of the members were husband and wife George and Esther Szekeres , both famous problem solvers and problem creators. Since it was a number theoretic problem it was sent to the four most renowned Australian number theorists. They were asked to work on it for six hours. None of them could solve it in this time. The problem committee submitted it to the jury of the XXIX IMO marked with a double asterisk, which meant a superhard problem, possibly too hard to pose. After a long discussion, the jury finally had the courage to choose it as the last problem of the competition. Eleven students gave perfect solutions.
Among the eleven students receiving the maximum score for solving this problem were Ngô Bảo Châu , Ravi Vakil , Zvezdelina Stankova , and Nicușor Dan . [ 5 ] Emanouil Atanassov (from Bulgaria) solved the problem in a paragraph and received a special prize. [ 6 ]
The concept of standard Vieta jumping is a proof by contradiction , and consists of the following four steps: [ 7 ] first, assume toward a contradiction that some solution exists that violates the given requirements; second, take the minimal such solution according to some definition of minimality; third, use Vieta's formulas to replace one of the variables by the other root of the associated quadratic, producing a new, smaller valid solution; and fourth, observe that this contradicts the minimality of the chosen solution, so no violating solution can exist.
Problem #6 at IMO 1988 : Let a and b be positive integers such that ab + 1 divides a 2 + b 2 . Prove that ( a 2 + b 2 ) / ( ab + 1 ) is a perfect square . [ 8 ] [ 9 ]
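The statement can be checked empirically for small values. The sketch below (plain Python, with an arbitrary search bound) scans pairs (a, b) and confirms that whenever ab + 1 divides a 2 + b 2 , the quotient is a perfect square; it is an illustration of the claim, not a proof.

```python
# Brute-force check of the IMO 1988 Problem 6 statement for small a, b
# (the bound 200 is an arbitrary choice; this is not a proof).
from math import isqrt

for a in range(1, 200):
    for b in range(1, 200):
        q, r = divmod(a * a + b * b, a * b + 1)
        if r == 0:
            assert isqrt(q) ** 2 == q, (a, b, q)   # quotient must be a perfect square

print("verified for all 1 <= a, b < 200")
```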
The method of constant descent Vieta jumping is used when we wish to prove a statement regarding a constant k having something to do with the relation between a and b . Unlike standard Vieta jumping, constant descent is not a proof by contradiction, and it consists of the following four steps: [ 10 ]
Let a and b be positive integers such that ab divides a 2 + b 2 + 1 . Prove that 3 ab = a 2 + b 2 + 1 . [ 11 ]
Vieta jumping can be described in terms of lattice points on hyperbolas in the first quadrant. [ 2 ] The same process of finding smaller roots is used instead to find lower lattice points on a hyperbola while remaining in the first quadrant. The procedure is as follows:
This method can be applied to problem #6 at IMO 1988 : Let a and b be positive integers such that ab + 1 divides a 2 + b 2 . Prove that ( a 2 + b 2 ) / ( ab + 1 ) is a perfect square.
|
https://en.wikipedia.org/wiki/Vieta_jumping
|
The Vietnamese language is written with a Latin script with diacritics (accents and tone marks), which requires several accommodations when typing on phones or computers. Software-based systems allow Vietnamese to be written on phones or computers using software that is built into the device or installed from third parties, such as UniKey . Telex is the oldest input method devised to encode the Vietnamese language with its tones. Other input methods include VNI (a number-key-based input method) and VIQR . The VNI input method is not to be confused with the VNI code page.
Historically, Vietnamese was also written in chữ Nôm , which today is mainly used for ceremonial and traditional purposes, and remains the domain of historians and philologists . There have been attempts to type chữ Hán and chữ Nôm with existing Vietnamese input methods, but they are not widespread. [ 1 ] [ 2 ] Sometimes, Vietnamese is typed without tone marks, in which case the intended words can usually be guessed from context.
There are as many as 46 character encodings for representing the Vietnamese alphabet . [ 3 ] Unicode has become the most popular form for many of the world's writing systems, due to its great compatibility and software support. Diacritics may be encoded either as combining characters or as precomposed characters , which are scattered throughout the Latin-1 Supplement , Latin Extended-A , Latin Extended-B , and Latin Extended Additional blocks. The Vietnamese đồng symbol is encoded in the Currency Symbols block.
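A small illustration of the difference between precomposed and combining-character encodings, using Python's standard unicodedata module (the choice of example letter is arbitrary):

```python
# Precomposed vs. combining-character representation of a Vietnamese letter.
import unicodedata

precomposed = "\u1EBF"                        # ế (E WITH CIRCUMFLEX AND ACUTE)
decomposed = unicodedata.normalize("NFD", precomposed)

print([hex(ord(c)) for c in decomposed])       # ['0x65', '0x302', '0x301']: e + circumflex + acute
print(unicodedata.normalize("NFC", decomposed) == precomposed)   # True: recomposes to one code point
```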
Unicode's coverage of Vietnamese has been subject to several changes since the 1990s. Early versions of Unicode encoded dấu huyền and dấu sắc as U+0340 ◌̀ COMBINING GRAVE TONE MARK and U+0341 ◌́ COMBINING ACUTE TONE MARK , respectively. In 2001, these two characters were deprecated as duplicate encodings of U+0300 ◌̀ COMBINING GRAVE ACCENT and U+0301 ◌́ COMBINING ACUTE ACCENT ; [ 4 ] this change was incorporated into Unicode 3.2, released in 2002. [ 5 ] With the 2009 release of Unicode 5.2, U+0340 ◌̀ and U+0341 ◌́ were undeprecated but discouraged. [ 6 ] [ 7 ] Historically, the Vietnamese language used other characters beyond the modern alphabet. The Middle Vietnamese letter B with flourish (ꞗ) is included in the Latin Extended-D block. The apex is not separately encoded in Unicode, because it derives from the Portuguese tilde , whereas dấu ngã , which derives from the Greek perispomeni , has always been misencoded as a tilde. As a workaround, U+1DC3 ◌᷃ COMBINING SUSPENSION MARK represents the apex on Wikisource and Wiktionary .
For systems that lack support for Unicode, dozens of 8-bit Vietnamese code pages have been designed. [ 3 ] The most commonly used of them were VISCII , VSCII (TCVN 5712:1993), VNI , VPS and Windows-1258 . [ 8 ] [ 9 ] Where ASCII is required, such as when ensuring readability in plain text e-mail, Vietnamese letters are often encoded according to Vietnamese Quoted-Readable (VIQR) or VSCII Mnemonic (VSCII-MNEM), [ 10 ] though usage of either variable-width scheme has declined dramatically following the adoption of Unicode on the World Wide Web . For instance, support for all above mentioned 8-bit encodings, with the exception of Windows-1258, was dropped from Mozilla software in 2014. [ 11 ]
Many Vietnamese fonts intended for desktop publishing are encoded in VNI or TCVN3 ( VSCII ). [ 9 ] Such fonts are known as "ABC fonts". [ 12 ] Popular web browsers lack support for specialty Vietnamese encodings, so any webpage that uses these fonts appears as unintelligible mojibake on systems without them installed.
Vietnamese often stacks diacritics, so typeface designers must take care to prevent stacked diacritics from colliding with adjacent letters or lines. When a tone mark is used together with another diacritic, offsetting the tone mark to the right preserves consistency and avoids slowing down saccades . [ 13 ] In advertising signage and in cursive handwriting, diacritics often take forms unfamiliar to other Latin alphabets. For example, the lowercase letter I retains its tittle in ì , ỉ , ĩ , and í . [ 14 ] These nuances are rarely accounted for in computing environments.
Vietnamese writing requires 134 additional letters (between both cases) besides the 52 already present in ASCII. [ 15 ] This exceeds the 128 additional characters available in a conventional extended ASCII encoding. Although this can be solved by using a variable-width encoding (as is done by UTF-8 ), a number of approaches have been used by other encodings to support Vietnamese without doing so.
The following table provides Unicode code points for all non-ASCII Vietnamese letters.
Many fonts support a subset of the Latin writing system that omits much of the Vietnamese alphabet. Due to the high density of Vietnamese-specific characters in Vietnamese text, Web browsers that implement font substitution reliably produce a ransom note effect when the webpage specifies an inadequate font.
Unicode includes over 10,000 Nôm characters as part of Unicode's repertoire of CJK Unified Ideographs . Of these characters, 10,082 can be found in the CJK Unified Ideographs Extension B block, while the rest are distributed between the CJK Unified Ideographs , CJK Unified Ideographs Extension A , and CJK Unified Ideographs Extension C blocks. A further 1,028 characters, including over 400 characters specific to the Tày language , are encoded in the CJK Unified Ideographs Extension E block. The characters are taken from the Vietnamese standards TCVN 5773:1993 and TCVN 6909:2001 [error for TCVN 6056:1995?], as well as from research by the Han-Nom Research Institute and other groups. [ 18 ] All the characters in TCVN 5773:1993 and about 95% of the characters in TCVN 6909:2001 [error for TCVN 6056:1995?] have corresponding codepoints in Unicode 5.1, though TCVN 5773:1993 itself mapped most of its characters to the Private Use Area of Unicode. [ 19 ] Unicode 13.0 added two diacritical characters to the Ideographic Symbols and Punctuation block that were commonly used to indicate borrowed characters in chữ Nôm . [ 20 ] [ 21 ]
The two most comprehensive Nôm fonts are the Vietnamese Nôm Preservation Foundation 's Nôm Na Tống Light [ 22 ] and the community-developed HAN NOM A / HAN NOM B , [ 23 ] both of which place a large number of unstandardized characters in the Private Use Areas .
The Unicode Consortium's Unihan database includes Vietnamese readings of some characters but does not distinguish between Sino-Vietnamese and Nôm readings.
Like other CJKV writing systems , chữ Nôm is traditionally written vertically , from top to bottom and right to left.
Chữ Hán and chữ Nôm may also be annotated using ruby characters , which in the case of Vietnamese are written in chữ Quốc Ngữ . [ 24 ]
A purely physical Vietnamese keyboard would be impractical, due to the sheer number of letter-diacritic-diacritic combinations in the alphabet, e.g. ờ and ị. Instead, Vietnamese input relies on formulaic software-based keyboard layouts, virtual keyboards , or input methods (also known as IMEs).
Vietnamese keyboard layouts rely on dead keys to compose letters with diacritics. Most desktop operating systems include a Vietnamese keyboard layout similar to TCVN 6064:1995 [ vi ] , a Vietnamese national standard. Previously, typewriters used an AZERTY-based Vietnamese layout (AĐERTY). [ 25 ]
The three most common Vietnamese input methods are Telex , VNI , and VIQR . Telex indicates diacritics using letters that are unlikely to appear at the end of a word, while VNI repurposes the number keys or function keys and VIQR repurposes various punctuation marks. The Telex and VIQR conventions originated in an earlier era of telex machines and typewriters, respectively.
Support for these input methods is provided by input method editors (IMEs), which are known in Vietnamese as bộ gõ , literally "peckers" or, more generally, "typing sets". IMEs may be provided by the operating system, installed as a third-party application, installed as a browser extension , or provided by an individual website in the form of a script . Common third-party applications include GoTiengViet, UniKey , VietKey, VPSKeys , WinVNKey , and xvnkb. On Unix-like operating systems, the IBus and SCIM frameworks both support Vietnamese. IME scripts such as AVIM, Mudim, and VietTyping can be found on most Vietnamese message boards , the Vietnamese Wikipedia , and other text-intensive websites. The Vietnamese Web browser Cốc Cốc comes with an input method built in.
Input methods allow words to be composed in a more flexible order than keyboard layouts allow. For example, to enter the word " viết " using the TCVN 6064:1995 keyboard layout, one must type V I 3 8 T , in that order. By contrast, most IMEs permit the user to insert diacritics at the end of the word: V I E E T S in Telex, V I E T 6 1 in VNI, or V I E T ^ ' in VIQR. Some IMEs even allow diacritics to be entered before their base letters. Depending on an IME's implementation, it may also be possible to edit an existing word's diacritics without retyping the word.
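As a concrete illustration of the Telex convention described above (a minimal sketch only; real Telex engines handle many more rules, such as choosing the main vowel, the ư/ơ horn, and đ), the following Python toy composer turns the keystrokes V I E E T S into " viết ":

```python
import unicodedata

# Toy Telex composer: doubled vowels gain a circumflex, and a trailing
# tone letter (s, f, r, x, j) places a tone mark on the last vowel.
CIRCUMFLEX = {"aa": "â", "ee": "ê", "oo": "ô"}
TONES = {"s": "\u0301", "f": "\u0300", "r": "\u0309",
         "x": "\u0303", "j": "\u0323"}  # acute, grave, hook, tilde, dot below

def telex(keys: str) -> str:
    tone = ""
    if keys and keys[-1] in TONES:            # tone letter is typed last
        keys, tone = keys[:-1], TONES[keys[-1]]
    for seq, letter in CIRCUMFLEX.items():    # compose doubled vowels
        keys = keys.replace(seq, letter)
    if tone:                                  # attach tone to the last vowel
        for i in reversed(range(len(keys))):
            if keys[i] in "aeiouyâêô":
                keys = keys[:i + 1] + tone + keys[i + 1:]
                break
    return unicodedata.normalize("NFC", keys) # precompose, e.g. ê + ◌́ -> ế

print(telex("vieets"))  # -> viết
```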
Some virtual keyboards supplement the standard dead keys with dedicated shortcut keys. For example, with the VIQR keyboard built into iOS , it is possible to add a horn to "U" by tapping either 123 → #+= → + or the dedicated ◌̛ key, which has no analogue on a physical keyboard.
Borrowing a feature common amongst Chinese input methods , some Vietnamese IMEs allow the user to skip diacritics altogether: after typing the base letters, the user selects the accented word from a candidate list. In order to provide this autocomplete list, the IME may need to communicate with a Web service . Some IMEs also use candidate lists to allow the user to convert text from the Vietnamese alphabet to chữ Nôm , because there is no one-to-one correspondence between alphabetic words and nôm characters.
Typical Vietnamese text contains a high proportion of compound words. Compound words are never hyphenated in contemporary usage, so spell checkers are limited to checking individual syllables unless a statistical language model is consulted.
Vietnamese has rigid spelling rules and few exceptions, so text-to-speech engines may avoid dictionary lookups except when encountering a foreign loan word. TTS engines must account for tones , which are essential to the meaning of any Vietnamese word, e.g. má (mother) is a different word from mà (but).
Internationalized user interfaces are generally unable to use the full complement of Vietnamese pronouns that would be expected in a traditional social setting, even when much is known about the user. Instead, user interfaces typically use generic pronouns such as tôi and bạn , some of which make potentially incorrect assumptions about the user's age and relationship to other users. For example, when a social media platform notifies a user about a younger user, it may refer to the latter in the third person as anh ấy instead of em ấy , leading the user to misinterpret the notification as a reference to someone else. [ 26 ]
|
https://en.wikipedia.org/wiki/Vietnamese_language_and_computers
|
In topological data analysis , the Vietoris–Rips filtration (sometimes shortened to " Rips filtration ") is the collection of nested Vietoris–Rips complexes on a metric space created by taking the sequence of Vietoris–Rips complexes over an increasing scale parameter. Often, the Vietoris–Rips filtration is used to create a discrete , simplicial model on point cloud data embedded in an ambient metric space. [ 1 ] The Vietoris–Rips filtration is a multiscale extension of the Vietoris–Rips complex that enables researchers to detect and track the persistence of topological features, over a range of parameters, by way of computing the persistent homology of the entire filtration. [ 2 ] [ 3 ] [ 4 ] It is named after Leopold Vietoris and Eliyahu Rips .
The Vietoris–Rips filtration is the nested collection of Vietoris–Rips complexes indexed by an increasing scale parameter. The Vietoris–Rips complex is a classical construction in mathematics that dates back to a 1927 paper [ 5 ] of Leopold Vietoris , though it was independently considered by Eliyahu Rips in the study of hyperbolic groups , as noted by Mikhail Gromov in the 1980s. [ 6 ] The conjoined name "Vietoris–Rips" is due to Jean-Claude Hausmann. [ 7 ]
Given a metric space $X$ and a scale parameter (sometimes called the threshold or distance parameter ) $r \in [0, \infty)$, the Vietoris–Rips complex (with respect to $r$) is defined as $\mathbf{VR}_r(X) = \{ S \subseteq X \mid S \text{ finite};\ \operatorname{diam} S \le r;\ S \neq \emptyset \}$, where $\operatorname{diam} S$ is the diameter , i.e. the maximum distance between points lying in $S$. [ 8 ]
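The definition translates directly into a brute-force computation for small finite metric spaces; the following Python sketch (helper names are hypothetical, and the enumeration is exponential in the number of points, so it is for illustration only) lists the simplices of $\mathbf{VR}_r(X)$:

```python
from itertools import combinations

# Brute-force sketch of VR_r(X) on a small finite metric space:
# a non-empty subset S is a simplex exactly when diam(S) <= r.
def diam(S, d):
    return max((d(p, q) for p, q in combinations(S, 2)), default=0.0)

def vietoris_rips(X, d, r):
    return [S for k in range(1, len(X) + 1)
              for S in combinations(X, k) if diam(S, d) <= r]

X = [0.0, 1.0, 2.0]                  # three points on a line
d = lambda p, q: abs(p - q)
print(vietoris_rips(X, d, 1.0))      # vertices plus edges {0,1} and {1,2}
print(vietoris_rips(X, d, 2.0))      # the 2-simplex {0,1,2} now appears
```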
Observe that if $r \le s$ in $[0, \infty)$, there is a simplicial inclusion map $\mathbf{VR}_r(X) \hookrightarrow \mathbf{VR}_s(X)$. The Vietoris–Rips filtration is the nested collection of complexes $\mathbf{VR}_r(X)$:

$$\mathbf{VR}(X) = \{ \mathbf{VR}_r(X) \}_{r \in [0, \infty)}$$
If the non-negative real numbers $[0,\infty)$ are viewed as a posetal category via the $\le$ relation , then the Vietoris–Rips filtration can be viewed as a functor $\mathbf{VR}(X) : [0,\infty) \to \mathbf{Simp}$ valued in the category of simplicial complexes and simplicial maps, where the morphisms (i.e., relations in the poset) in the source category induce inclusion maps among the complexes. [ 9 ] Note that the category of simplicial complexes may be viewed as a subcategory of $\mathbf{Top}$, the category of topological spaces , by post-composing with the geometric realization functor.
The size of a filtration refers to the number of simplices in the largest complex, assuming the underlying metric space is finite. The $k$-skeleton, i.e., the subcomplex of simplices up to dimension $k$, of the Vietoris–Rips filtration is known to have size $O(n^{k+1})$, where $n$ is the number of points. [ 10 ] The complete skeleton has precisely $2^n - 1$ simplices, one for each non-empty subset of points. [ 9 ] Since this is exponential, researchers usually compute the skeleton of the Vietoris–Rips filtration only up to small values of $k$. [ 2 ]
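For a concrete sense of scale (a worked count, not a figure from the source): with $n = 20$ points the complete skeleton already contains $2^{20} - 1 = 1{,}048{,}575$ simplices, while the $2$-skeleton contains only

$$\binom{20}{1} + \binom{20}{2} + \binom{20}{3} = 20 + 190 + 1140 = 1350$$

simplices, which is why restricting to small $k$ is standard practice.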
When the underlying metric space is finite, the Vietoris–Rips filtration is sometimes referred to as essentially discrete , [ 9 ] meaning that there exists some terminal or maximum scale parameter $r_{\max} \in [0,\infty)$ such that $\mathbf{VR}_s(X) = \mathbf{VR}_{r_{\max}}(X)$ for all $s \ge r_{\max}$, and furthermore that the inclusion map $\mathbf{VR}_{s \to t}(X) : \mathbf{VR}_s(X) \hookrightarrow \mathbf{VR}_t(X)$ is an isomorphism for all but finitely many parameters $s \le t$. In other words, when the underlying metric space is finite, the Vietoris–Rips filtration has a largest complex, and the complex changes at only finitely many steps. The latter implies that the Vietoris–Rips filtration on a finite metric space can be considered as indexed over a discrete set such as $\mathbb{N}$, by restricting the filtration to the scale parameters at which the filtration changes, then relabeling the complexes using the natural numbers .
An explicit bound can also be given for the number of steps at which the Vietoris–Rips filtration changes. The Vietoris–Rips complex is a clique complex , meaning it is entirely determined by its 1-skeleton. [ 11 ] Therefore the number of steps at which the Vietoris–Rips filtration changes is bounded by the number of edges in the largest complex. The largest complex has $\binom{n}{2} = n(n-1)/2$ edges, since every pair of its $n$ vertices is eventually joined by an edge. Therefore the Vietoris–Rips filtration changes at $O(n^2)$ steps, where $O(-)$ denotes an asymptotic upper bound .
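Because the filtration can only change when the scale parameter crosses a pairwise distance, these critical scales are straightforward to enumerate; a small Python sketch (hypothetical helper names), continuing the brute-force style above:

```python
from itertools import combinations

# The VR filtration on a finite metric space changes only when r crosses
# a pairwise distance, of which there are at most n(n-1)/2.
def critical_scales(X, d):
    return sorted({d(p, q) for p, q in combinations(X, 2)})

X = [0.0, 1.0, 2.0, 3.5]
print(critical_scales(X, lambda p, q: abs(p - q)))
# [1.0, 1.5, 2.0, 2.5, 3.5] -- at most C(4,2) = 6 steps; ties collapse to 5
```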
For points in Euclidean space , the Vietoris–Rips filtration is an approximation to the Čech filtration, in the sense of the interleaving distance. [ 1 ] This follows from the fact that for any scale parameter $\alpha$, the Vietoris–Rips and Čech complexes on a finite set $X$ of points in Euclidean space satisfy the inclusion relationship $\mathbf{VR}_\alpha(X) \subseteq \operatorname{\check{C}ech}_{\sqrt{2}\alpha}(X) \subseteq \mathbf{VR}_{\sqrt{2}\alpha}(X)$, which is sometimes referred to as the Vietoris–Rips Lemma. [ 12 ] In general metric spaces, a straightforward application of the triangle inequality shows that $\mathbf{VR}_\alpha(X) \subseteq \operatorname{\check{C}ech}_{2\alpha}(X) \subseteq \mathbf{VR}_{2\alpha}(X)$ for any scale parameter $\alpha$.
Since the Vietoris–Rips filtration has an exponential number of simplices in its complete skeleton, a significant amount of research has been done on approximating the persistent homology of the Vietoris–Rips filtration using constructions of smaller size. The first work in this direction was published by computer scientist Donald Sheehy in 2012, who showed how to construct a filtration of size $O(n)$ in $O(n \log n)$ time that approximates the persistent homology of the Vietoris–Rips filtration to a desired margin of error. [ 10 ] This type of filtration is known as a sparse Vietoris–Rips filtration , since it removes points from the standard Vietoris–Rips filtration using ideas from computational geometry related to geometric spanners . [ 13 ] Since then, several more efficient methods have been developed for approximating the Vietoris–Rips filtration, mostly using the ideas of Sheehy, but also building upon approximation schemes developed for the Čech [ 14 ] and Delaunay [ 15 ] filtrations. [ 16 ] [ 2 ]
It is known that persistent homology can be sensitive to outliers in the underlying data set. [ 17 ] To remedy this, in 2009 Gunnar Carlsson and Afra Zomorodian proposed a multidimensional version of persistence that considers filtrations with respect to multiple parameters, such as scale and density. [ 18 ]
To that end, several multiparameter extensions of the Vietoris–Rips filtration have been developed.
|
https://en.wikipedia.org/wiki/Vietoris–Rips_filtration
|
The Vieussens valve of the coronary sinus is a prominent [ 1 ] valve at the end of the great cardiac vein , marking the commencement of the coronary sinus . [ 2 ] [ 1 ] It is often a flimsy valve composed of one to three leaflets. It is present in 80–90% of individuals and serves as an anatomical landmark. It is clinically important because it obstructs catheters in about 20% of patients. [ 3 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Vieussens_valve_of_the_coronary_sinus
|
A view model or viewpoints framework in systems engineering , software engineering , and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture , software architecture , or enterprise architecture . A view is a representation of the whole system from the perspective of a related set of concerns. [ 1 ] [ 2 ]
Since the early 1990s there have been a number of efforts to prescribe approaches for describing and analyzing system architectures. A result of these efforts has been the definition of sets of views (or viewpoints). These are sometimes referred to as architecture frameworks or enterprise architecture frameworks , but are usually called "view models".
Usually a view is a work product that presents specific architecture data for a given system. However, the same term is sometimes used to refer to a view definition , including the particular viewpoint and the corresponding guidance that defines each concrete view. The term view model is related to view definitions.
The purpose of views and viewpoints is to enable humans to comprehend very complex systems , to organize the elements of the problem and the solution around domains of expertise and to separate concerns . In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization. [ 3 ]
Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications . A business executive will ask different questions about a system's make-up than would a system implementer. The concept of a viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system in order to facilitate communication with the stakeholders. Each viewpoint satisfies an audience with interest in a particular set of aspects of the system. Each viewpoint may use a specific viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint. Viewpoint modeling has become an effective approach for dealing with the inherent complexity of large distributed systems.
Architecture description practices, as described in IEEE Std 1471-2000 , utilize multiple views to address several areas of concerns, each one focusing on a specific aspect of the system. Examples of architecture frameworks using multiple views include Kruchten's "4+1" view model , the Zachman Framework , TOGAF , DoDAF , and RM-ODP .
In the 1970s, methods began to appear in software engineering for modeling with multiple views. In 1977, Douglas T. Ross and K.E. Schoman introduced the constructs of context, viewpoint, and vantage point to organize the modeling process in systems requirements definition. [ 4 ] According to Ross and Schoman, a viewpoint "makes clear what aspects are considered relevant to achieving ... the overall purpose [of the model]" and determines how we look at the subject being modelled.
As examples of viewpoints, the paper offers technical, operational and economic viewpoints. In 1992, Anthony Finkelstein and others published an influential paper on viewpoints. [ 5 ] In that work, "A viewpoint can be thought of as a combination of the idea of an “actor”, “knowledge source”, “role” or “agent” in the development process and the idea of a “view” or “perspective” which an actor maintains." An important idea in this paper was to distinguish "a representation style , the scheme and notation by which the viewpoint expresses what it can see" from "a specification , the statements expressed in the viewpoint's style describing particular domains". Subsequent work, such as IEEE 1471 , preserved this distinction by using two separate terms: viewpoint and view, respectively.
Since the early 1990s there have been a number of efforts to codify approaches for describing and analyzing system architectures. These are often termed architecture frameworks or sometimes viewpoint sets . Many of these have been funded by the United States Department of Defense , but some have sprung from international or national efforts in ISO or the IEEE . Among these, the IEEE Recommended Practice for Architectural Description of Software-Intensive Systems ( IEEE Std 1471-2000 ) established useful definitions of view, viewpoint, stakeholder and concern and guidelines for documenting a system architecture through the use of multiple views by applying viewpoints to address stakeholder concerns . [ 6 ] The advantage of multiple views is that hidden requirements and stakeholder disagreements can be discovered more readily. However, studies show that in practice, the added complexity of reconciling multiple views can undermine this advantage. [ 7 ]
IEEE 1471 (now ISO/IEC/IEEE 42010:2011 , Systems and software engineering — Architecture description ) prescribes the contents of architecture descriptions and describes their creation and use under a number of scenarios, including precedented and unprecedented design, evolutionary design, and capture of design of existing systems. In all of these scenarios the overall process is the same: identify stakeholders , elicit concerns, identify a set of viewpoints to be used, and then apply these viewpoint specifications to develop the set of views relevant to the system of interest. Rather than define a particular set of viewpoints, the standard provides uniform mechanisms and requirements for architects and organizations to define their own viewpoints. In 1996 the ISO Reference Model for Open Distributed Processing ( RM-ODP ) was published to provide a useful framework for describing the architecture and design of large-scale distributed systems.
A view of a system is a representation of the system from the perspective of a viewpoint. This viewpoint on a system involves a perspective focusing on specific concerns regarding the system, which suppresses details to provide a simplified model having only those elements related to the concerns of the viewpoint. For example, a security viewpoint focuses on security concerns and a security viewpoint model contains those elements that are related to security from a more general model of a system. [ 8 ]
A view allows a user to examine a portion of a particular interest area. For example, an Information View may present all functions, organizations, technology, etc. that use a particular piece of information, while the Organizational View may present all functions, technology, and information of concern to a particular organization. In the Zachman Framework views comprise a group of work products whose development requires a particular analytical and technical expertise because they focus on either the “what,” “how,” “who,” “where,” “when,” or “why” of the enterprise. For example, Functional View work products answer the question “how is the mission carried out?” They are most easily developed by experts in functional decomposition using process and activity modeling. They show the enterprise from the point of view of functions. They also may show organizational and information components, but only as they relate to functions. [ 9 ]
In systems engineering, a viewpoint is a partitioning or restriction of concerns in a system. A viewpoint is adopted so that the issues within those concerns can be addressed separately. A good selection of viewpoints also partitions the design of the system into specific areas of expertise. [ 3 ]
Viewpoints provide the conventions, rules, and languages for constructing, presenting and analysing views. In ISO/IEC 42010:2007 ( IEEE-Std-1471-2000 ) a viewpoint is a specification for an individual view. A view is a representation of a whole system from the perspective of a viewpoint. A view may consist of one or more architectural models . [ 10 ] Each such architectural model is developed using the methods established by its associated architectural viewpoint, as well as for the system as a whole. [ 6 ]
A modeling perspective is one of a set of different ways to represent pre-selected aspects of a system. Each perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.

In information systems , the traditional way to divide modeling perspectives is to distinguish the structural, functional and behavioral/processual perspectives. This, together with rule, object, communication, and actor-and-role perspectives, is one way of classifying modeling approaches. [ 11 ]
In any given viewpoint, it is possible to make a model of the system that contains only the objects visible from that viewpoint, while still capturing all of the objects, relationships and constraints that are present in the system and relevant to that viewpoint. Such a model is said to be a viewpoint model, or a view of the system from that viewpoint. [ 3 ]
A given view is a specification for the system at a particular level of abstraction from a given viewpoint. Different levels of abstraction contain different levels of detail. Higher-level views allow the engineer to fashion and comprehend the whole design and identify and resolve problems in the large. Lower-level views allow the engineer to concentrate on a part of the design and develop the detailed specifications. [ 3 ]
In the system itself, however, all of the specifications appearing in the various viewpoint models must be addressed in the realized components of the system. And the specifications for any given component may be drawn from many different viewpoints. On the other hand, the specifications induced by the distribution of functions over specific components and component interactions will typically reflect a different partitioning of concerns than that reflected in the original viewpoints. Thus additional viewpoints, addressing the concerns of the individual components and the bottom-up synthesis of the system, may also be useful. [ 3 ]
An architecture description is a representation of a system architecture, at any time, in terms of its component parts, how those parts function, the rules and constraints under which those parts function, and how those parts relate to each other and to the environment. In an architecture description the architecture data is shared across several views and products.
At the data layer are the architecture data elements and their defining attributes and relationships. At the presentation layer are the products and views that support a visual means to communicate and understand the purpose of the architecture, what it describes, and the various architectural analyses performed. Products provide a way for visualizing architecture data as graphical, tabular, or textual representations. Views provide the ability to visualize architecture data that cut across products, logically organizing the data for a specific or holistic perspective of the architecture.
The Three-schema approach for data modeling, introduced in 1977, can be considered one of the first view models. It is an approach to building information systems and systems information management that promotes the conceptual model as the key to achieving data integration . [ 13 ] The Three-schema approach defines three schemas and views:
At the center, the conceptual schema defines the ontology of the concepts as the users think of them and talk about them. The physical schema describes the internal formats of the data stored in the database , and the external schema defines the view of the data presented to the application programs . [ 14 ] The framework attempted to permit multiple data models to be used for external schemata. [ 15 ]
Over the years, the skill and interest in building information systems has grown tremendously. However, for the most part, the traditional approach to building systems has only focused on defining data from two distinct views, the "user view" and the "computer view". From the user view, which will be referred to as the “external schema,” the definition of data is in the context of reports and screens designed to aid individuals in doing their specific jobs. The required structure of data from a usage view changes with the business environment and the individual preferences of the user. From the computer view, which will be referred to as the “internal schema,” data is defined in terms of file structures for storage and retrieval. The required structure of data for computer storage depends upon the specific computer technology employed and the need for efficient processing of data. [ 16 ]
4+1 is a view model designed by Philippe Kruchten in 1995 for describing the architecture of software-intensive systems, based on the use of multiple, concurrent views. [ 17 ] The views are used to describe the system from the viewpoints of different stakeholders, such as end-users, developers and project managers. The four views of the model are the logical, development, process and physical views.
In addition, selected use cases or scenarios are utilized to illustrate the architecture. Hence the model contains 4+1 views. [ 17 ]
An enterprise architecture framework defines how to organize the structure and views associated with an enterprise architecture . Because the discipline of enterprise architecture and engineering is so broad, and because enterprises can be large and complex, the models associated with the discipline also tend to be large and complex. To manage this scale and complexity, an architecture framework provides tools and methods that can bring the task into focus and allow valuable artifacts to be produced when they are most needed.
Architecture frameworks are commonly used in information technology and information system governance. An organization may wish to mandate that certain models be produced before a system design can be approved. Similarly, it may wish to specify that certain views be used in the documentation of procured systems: the U.S. Department of Defense stipulates that specific DoDAF views be provided by equipment suppliers for capital projects above a certain value.
The Zachman Framework , originally conceived by John Zachman at IBM in 1987, is a framework for enterprise architecture, which provides a formal and highly structured way of viewing and defining an enterprise.
The Framework is used for organizing architectural "artifacts" in a way that takes into account both who the artifact targets (for example, business owner and builder) and what particular issue (for example, data and functionality) is being addressed. These artifacts may include design documents, specifications, and models. [ 19 ]
The Zachman Framework is often referenced as a standard approach for expressing the basic elements of enterprise architecture . The Zachman Framework has been recognized by the U.S. Federal Government as having "... received worldwide acceptance as an integrated framework for managing change in enterprises and the systems that support them." [ 20 ]
The International Organization for Standardization (ISO) Reference Model for Open Distributed Processing ( RM-ODP ) [ 21 ] specifies a set of viewpoints for partitioning the design of a distributed software/hardware system. Since most integration problems arise in the design of such systems or in very analogous situations, these viewpoints may prove useful in separating integration concerns. The RM-ODP viewpoints are the enterprise, information, computational, engineering, and technology viewpoints. [ 3 ]
RM-ODP further defines a requirement for a design to contain specifications of consistency between viewpoints. [ 3 ]
The Department of Defense Architecture Framework (DoDAF) defines a standard way to organize an enterprise architecture (EA) or systems architecture into complementary and consistent views. It is especially suited to large systems with complex integration and interoperability challenges, and is apparently unique in its use of " operational views " detailing the external customer's operating domain in which the developing system will operate.
The DoDAF defines a set of products that act as mechanisms for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through graphic, tabular, or textual means. These products are organized under four views.
Each view depicts certain perspectives of an architecture as described below. Only a subset of the full DoDAF viewset is usually created for each system development. The figure represents the information that links the operational view , systems and services view, and technical standards view. The three views and their interrelationships – driven by common architecture data elements – provide the basis for deriving measures such as interoperability or performance, and for measuring the impact of the values of these metrics on operational mission and task effectiveness. [ 22 ]
In the US Federal Enterprise Architecture , enterprise, segment, and solution architectures provide different business perspectives by varying the level of detail and addressing related but distinct concerns. Just as enterprises are themselves hierarchically organized, so are the different views provided by each type of architecture. The Federal Enterprise Architecture Practice Guidance (2006) has defined three types of architecture: enterprise architecture, segment architecture, and solution architecture. [ 23 ]
By definition, Enterprise Architecture (EA) is fundamentally concerned with identifying common or shared assets – whether they are strategies, business processes, investments, data, systems, or technologies. EA is driven by strategy; it helps an agency identify whether its resources are properly aligned to the agency mission and strategic goals and objectives. From an investment perspective, EA is used to drive decisions about the IT investment portfolio as a whole. Consequently, the primary stakeholders of the EA are the senior managers and executives tasked with ensuring the agency fulfills its mission as effectively and efficiently as possible. [ 23 ]
By contrast, segment architecture defines a simple roadmap for a core mission area, business service, or enterprise service. Segment architecture is driven by business management and delivers products that improve the delivery of services to citizens and agency staff. From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. The primary stakeholders for segment architecture are business owners and managers. Segment architecture is related to EA through three principles: structure, reuse, and alignment. First, segment architecture inherits the framework used by the EA, although it may be extended and specialized to meet the specific needs of a core mission area or common or shared service. Second, segment architecture reuses important assets defined at the enterprise level including: data; common business processes and investments; and applications and technologies. Third, segment architecture aligns with elements defined at the enterprise level, such as business strategies, mandates, standards, and performance measures. [ 23 ]
In "Framework for Modeling Space Systems Architectures" (2006), Peter Shames and Joseph Skipper defined a "nominal set of views", [ 6 ] derived from CCSDS RASDS, RM-ODP and ISO 10746, and compliant with IEEE 1471 .
This "set of views", as described below, is a listing of possible modeling viewpoints. Not all of these views may be used for any one project and other views may be defined as necessary. Note that for some analyses elements from multiple viewpoints may be combined into a new view, possibly using a layered representation.
In a later presentation this nominal set of views was presented as an Extended RASDS Semantic Information Model Derivation, [ 24 ] where RASDS stands for Reference Architecture for Space Data Systems.
In contrast to the previously listed view models, this "nominal set of views" lists a whole range of views, making it possible to develop powerful and extensible approaches for describing a general class of software-intensive system architectures. [ 6 ]
This article incorporates public domain material from the National Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/View_model
|
ViiV Healthcare ( / ˈ v iː v / VEEV ) is a British multinational pharmaceutical company specializing in the research and development of medicines to treat and prevent HIV/AIDS . Its global headquarters is located in London . The company was created as a joint venture by GSK and Pfizer in November 2009, with both companies transferring their HIV assets to the new company. [ 1 ] In 2012, Shionogi joined the company. As of December 2023, 76.5% of the company is owned by GSK, 13.5% by Pfizer and 10% by Shionogi. [ 2 ] According to The Financial Times, the company's co-ownership structure may change depending on the achievement of certain milestones. [ 1 ]
ViiV Healthcare's products have a market share of approximately 32% of the global HIV therapy market, making it the second-largest healthcare company in the sector, after Gilead Sciences . [ 3 ]
ViiV Healthcare's global headquarters are in London in the United Kingdom , and the company has sites in a number of other countries including the United States, Australia, Belgium, Canada, France, Germany, Italy, Japan, Mexico, the Netherlands, Portugal, Puerto Rico, Russia, Spain and Switzerland. [ 4 ]
The company markets 17 products. [ 5 ]
ViiV Healthcare has stated that it will continue the not-for-profit pricing schemes that Pfizer and GlaxoSmithKline had been involved in prior to the setting up of the company. This program covers all low- and middle-income countries, as well as all of Sub-Saharan Africa . [ 7 ]
The company has also granted voluntary licenses to 14 generics companies to enable the low-cost manufacture and sale of generic versions of the company's products in specific countries and/or regions. [ 7 ] [ 8 ]
In March 2020, ViiV Healthcare announced the initiation of a study in partnership with University of South Carolina 's Ryan White Program to determine the effectiveness of ride-sharing services in improving access to care for people living with HIV. [ 9 ]
|
https://en.wikipedia.org/wiki/ViiV_Healthcare
|
The Viking program consisted of a pair of identical American space probes , Viking 1 and Viking 2 , which were launched in 1975 and landed on Mars in 1976. [ 1 ] The mission effort began in 1968 and was managed by the NASA Langley Research Center. [ 4 ] Each spacecraft was composed of two main parts: an orbiter which photographed the surface of Mars from orbit , and a lander which studied the planet from the surface. The orbiters also served as communication relays for the landers once they touched down.
The Viking program grew from NASA 's earlier, even more ambitious, Voyager Mars program, which was not related to the successful Voyager deep space probes of the late 1970s. Viking 1 was launched on August 20, 1975, and the second craft, Viking 2 , was launched on September 9, 1975, both riding atop Titan IIIE rockets with Centaur upper stages. Viking 1 entered Mars orbit on June 19, 1976, with Viking 2 following on August 7.
After orbiting Mars for more than a month and returning images used for landing site selection, the orbiters and landers detached; the landers then entered the Martian atmosphere and soft-landed at the sites that had been chosen. The Viking 1 lander touched down on the surface of Mars on July 20, 1976, more than two weeks before Viking 2 ' s arrival in orbit. Viking 2 then successfully soft-landed on September 3. The orbiters continued imaging and performing other scientific operations from orbit while the landers deployed instruments on the surface. The program terminated in 1982.
The project cost was roughly US$1 billion at the time of launch, [ 5 ] [ 6 ] equivalent to about $6 billion in 2023 dollars. [ 7 ] The mission was considered successful and formed most of the body of knowledge about Mars through the late 1990s and early 2000s. [ 8 ] [ 9 ]
The primary objectives of the two Viking orbiters were to transport the landers to Mars, perform reconnaissance to locate and certify landing sites, act as communications relays for the landers, and perform their own scientific investigations. Each orbiter, based on the earlier Mariner 9 spacecraft, was an octagon approximately 2.5 m (8.2 ft) across. The fully fueled orbiter-lander pair had a mass of 3,527 kg (7,776 lb). After separation and landing, the lander had a mass of about 600 kg (1,300 lb) and the orbiter 900 kg (2,000 lb). The orbiter's launch mass was 2,328 kg (5,132 lb), of which 1,445 kg (3,186 lb) were propellant and attitude control gas. The eight faces of the ring-like structure were 0.457 m (18 in) high and were alternately 1.397 and 0.508 m (55 and 20 in) wide. The overall height was 3.29 m (10.8 ft) from the lander attachment points on the bottom to the launch vehicle attachment points on top. There were 16 modular compartments, 3 on each of the 4 long faces and one on each short face. Four solar panel wings extended from the axis of the orbiter; the distance from tip to tip of two oppositely extended solar panels was 9.75 m (32 ft).
The main propulsion unit was mounted above the orbiter bus . Propulsion was furnished by a bipropellant ( monomethylhydrazine and nitrogen tetroxide ) liquid-fueled rocket engine which could be gimballed up to 9 degrees . The engine was capable of 1,323 N (297 lbf ) thrust, providing a change in velocity of 1,480 m/s (3,300 mph). Attitude control was achieved by 12 small compressed-nitrogen jets.
An acquisition Sun sensor , a cruise Sun sensor, a Canopus star tracker and an inertial reference unit consisting of six gyroscopes allowed three-axis stabilization. Two accelerometers were also on board.
Communications were accomplished through a 20 W S-band (2.3 GHz ) transmitter and two 20 W TWTAs . An X band (8.4 GHz) downlink was also added specifically for radio science and to conduct communications experiments. Uplink was via S band (2.1 GHz). A two-axis steerable parabolic dish antenna with a diameter of approximately 1.5 m was attached at one edge of the orbiter base, and a fixed low-gain antenna extended from the top of the bus. Two tape recorders were each capable of storing 1280 megabits . A 381- MHz relay radio was also available. [ citation needed ]
The power to the two orbiter craft was provided by eight 1.57 m × 1.23 m (62 in × 48 in) solar panels , two on each wing. The solar panels comprised a total of 34,800 solar cells and produced 620 W of power at Mars. Power was also stored in two nickel-cadmium 30- A·h batteries .
The combined area of the four panels was 15 square meters (160 square feet), and they provided both regulated and unregulated direct current power; unregulated power was provided to the radio transmitter and the lander.
Two 30-amp·hour, nickel-cadmium, rechargeable batteries provided power when the spacecraft was not facing the Sun, during launch, while performing correction maneuvers and also during Mars occultation. [ 10 ]
By discovering many geological forms that are typically formed from large amounts of water, the images from the orbiters caused a revolution in our ideas about water on Mars . Huge river valleys were found in many areas. They showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and travelled thousands of kilometers. Large areas in the southern hemisphere contained branched stream networks, suggesting that rain once fell. The flanks of some volcanoes are believed to have been exposed to rainfall because they resemble the rain-carved flanks of Hawaiian volcanoes. Many craters look as if the impactor fell into mud. When they were formed, ice in the soil may have melted, turned the ground into mud, then flowed across the surface. Normally, material from an impact goes up, then down. It does not flow across the surface, going around obstacles, as it does on some Martian craters. [ 11 ] [ 12 ] [ 13 ] Regions called " Chaotic Terrain " seemed to have quickly lost great volumes of water, causing large channels to be formed. The amount of water involved was estimated at ten thousand times the flow of the Mississippi River . [ 14 ] Underground volcanism may have melted frozen ice; the water then flowed away and the ground collapsed to leave chaotic terrain.
Each lander comprised a six-sided aluminium base with alternate 1.09 and 0.56 m (43 and 22 in) long sides, supported on three extended legs attached to the shorter sides. The leg footpads formed the vertices of an equilateral triangle with 2.21 m (7.3 ft) sides when viewed from above, with the long sides of the base forming a straight line with the two adjoining footpads. Instrumentation was attached inside and on top of the base, elevated above the surface by the extended legs. [ 15 ]
Each lander was enclosed in an aeroshell heat shield designed to slow the lander down during the entry phase. To prevent contamination of Mars by Earth organisms, each lander, upon assembly and enclosure within the aeroshell, was enclosed in a pressurized "bioshield" and then sterilized at a temperature of 111 °C (232 °F) for 40 hours. For thermal reasons, the cap of the bioshield was jettisoned after the Centaur upper stage powered the Viking orbiter/lander combination out of Earth orbit. [ 16 ]
Astronomer Carl Sagan helped to choose landing sites for both Viking probes. [ 17 ]
Each lander arrived at Mars attached to the orbiter. The assembly orbited Mars many times before the lander was released and separated from the orbiter for descent to the surface. Descent comprised four distinct phases, starting with a deorbit burn . The lander then experienced atmospheric entry with peak heating occurring a few seconds after the start of frictional heating with the Martian atmosphere. At an altitude of about 6 kilometers (3.7 miles) and traveling at a velocity of 900 kilometers per hour (600 mph), the parachute deployed, the aeroshell released and the lander's legs unfolded. At an altitude of about 1.5 kilometers (5,000 feet), the lander activated its three retro-engines and was released from the parachute. The lander then immediately used retrorockets to slow and control its descent, with a soft landing on the surface of Mars. [ 18 ]
At landing (after expending its rocket propellant), each lander had a mass of about 600 kg.
Propulsion for deorbit was provided by the monopropellant hydrazine (N₂H₄), through a rocket with 12 nozzles arranged in four clusters of three that provided 32 newtons (7.2 lbf) of thrust, translating to a change in velocity of 180 m/s (590 ft/s). These nozzles also acted as the control thrusters for translation and rotation of the lander.
Terminal descent (after use of a parachute ) and landing used three (one affixed on each long side of the base, separated by 120 degrees) monopropellant hydrazine engines. The engines had 18 nozzles to disperse the exhaust and minimize effects on the ground, and were throttleable from 276 to 2,667 newtons (62 to 600 lb f ). The hydrazine was purified in order to prevent contamination of the Martian surface with Earth microbes . The lander carried 85 kg (187 lb) of propellant at launch, contained in two spherical titanium tanks mounted on opposite sides of the lander beneath the RTG windscreens, giving a total launch mass of 657 kg (1,448 lb). Control was achieved through the use of an inertial reference unit , four gyros , a radar altimeter , a terminal descent and landing radar , and the control thrusters.
Power was provided by two radioisotope thermoelectric generator (RTG) units containing plutonium-238 affixed to opposite sides of the lander base and covered by wind screens. Each Viking RTG [ 19 ] was 28 cm (11 in) tall, 58 cm (23 in) in diameter, had a mass of 13.6 kg (30 lb) and provided 30 watts of continuous power at 4.4 volts. Four wet cell sealed nickel-cadmium 8 Ah (28,800 coulombs ), 28 volt rechargeable batteries were also on board to handle peak power loads.
Communications were accomplished through a 20-watt S-band transmitter using two traveling-wave tubes . A two-axis steerable high-gain parabolic antenna was mounted on a boom near one edge of the lander base. An omnidirectional low-gain S-band antenna also extended from the base. Both these antennae allowed for communication directly with the Earth, permitting Viking 1 to continue to work long after both orbiters had failed. A UHF (381 MHz) antenna provided a one-way relay to the orbiter using a 30 watt relay radio. Data storage was on a 40-Mbit tape recorder, and the lander computer had a 6000- word memory for command instructions.
The lander carried instruments to achieve the primary scientific objectives of the lander mission: to study the biology , chemical composition ( organic and inorganic ), meteorology , seismology , magnetic properties, appearance, and physical properties of the Martian surface and atmosphere. Two 360-degree cylindrical scan cameras were mounted near one long side of the base. From the center of this side extended the sampler arm, with a collector head, temperature sensor , and magnet on the end. A meteorology boom, holding temperature, wind direction, and wind velocity sensors extended out and up from the top of one of the lander legs. A seismometer , magnet and camera test targets , and magnifying mirror are mounted opposite the cameras, near the high-gain antenna. An interior environmentally controlled compartment held the biology experiment and the gas chromatograph mass spectrometer. The X-ray fluorescence spectrometer was also mounted within the structure. A pressure sensor was attached under the lander body. The scientific payload had a total mass of approximately 91 kg (201 lb).
The Viking landers conducted biological experiments designed to detect life in the Martian soil (if it existed) with experiments designed by three separate teams, under the direction of chief scientist Gerald Soffen of NASA. One experiment turned positive for the detection of metabolism (current life), but based on the results of the other two experiments that failed to reveal any organic molecules in the soil, most scientists became convinced that the positive results were likely caused by non-biological chemical reactions from highly oxidizing soil conditions. [ 20 ]
Although there was a pronouncement by NASA during the mission saying that the Viking lander results did not demonstrate conclusive biosignatures in soils at the two landing sites, the test results and their limitations are still under assessment. The validity of the positive 'Labeled Release' (LR) results hinged entirely on the absence of an oxidative agent in the Martian soil, but one was later discovered by the Phoenix lander in the form of perchlorate salts. [ 21 ] [ 22 ] It has been proposed that organic compounds could have been present in the soil analyzed by both Viking 1 and Viking 2 , but remained unnoticed due to the presence of perchlorate, as detected by Phoenix in 2008. [ 23 ] Researchers found that perchlorate will destroy organics when heated and will produce chloromethane and dichloromethane , the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. [ 24 ]
The question of microbial life on Mars remains unresolved. Nonetheless, on April 12, 2012, an international team of scientists reported studies, based on mathematical speculation through complexity analysis of the Labeled Release experiments of the 1976 Viking Mission, that may suggest the detection of "extant microbial life on Mars." [ 25 ] [ 26 ] In addition, new findings from re-examination of the Gas Chromatograph Mass Spectrometer (GCMS) results were published in 2018. [ 27 ]
The leader of the imaging team was Thomas A. Mutch , a geologist at Brown University in Providence, Rhode Island . The camera used a movable mirror to direct light onto 12 photodiodes . Each of the 12 silicon diodes was designed to be sensitive to a different frequency band of light.
Several broad band diodes (designated BB1, BB2, BB3, and BB4) are placed to focus accurately at distances between six and 43 feet away from the lander. [ 28 ] A low resolution broad band diode was named SURVEY. [ 28 ] There are also three narrow band low resolution diodes (named BLUE, GREEN and RED) for obtaining color images , and another three (IR1, IR2, and IR3) for infrared imagery. [ 28 ]
The cameras scanned at a rate of five vertical scan lines per second, each composed of 512 pixels. The 300-degree panorama images were composed of 9150 lines. The cameras' scan was slow enough that, in a crew shot taken during development of the imaging system, several team members appear multiple times because they repositioned themselves as the camera scanned. [ 29 ] [ 30 ]
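A quick back-of-the-envelope computation using the figures above explains this effect; the following sketch (illustrative only) works out the duration of one panorama:

```python
# Back-of-the-envelope timing from the figures quoted above:
# 5 scan lines per second, 9,150 lines per 300-degree panorama.
lines_per_panorama = 9150
lines_per_second = 5
seconds = lines_per_panorama / lines_per_second
print(f"{seconds:.0f} s, i.e. about {seconds / 60:.1f} minutes per panorama")
# -> 1830 s, about 30.5 minutes: slow enough for a person to walk around
#    the camera and appear in the frame more than once.
```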
The Viking landers used a Guidance, Control and Sequencing Computer (GCSC) consisting of two Honeywell HDC 402 24-bit computers with 18K of plated-wire memory , while the Viking orbiters used a Command Computer Subsystem (CCS) using two custom-designed 18-bit serial processors. [ 32 ] [ 33 ] [ 34 ]
The two orbiters cost US$217 million at the time, which is about $1 billion in 2023 dollars. [ 35 ] [ 36 ] The most expensive single part of the program was the lander's life-detection unit, which cost about $60 million then or $400 million in 2023 dollars. [ 35 ] [ 36 ] Development of the Viking lander design cost $357 million. [ 35 ] This was decades before NASA's "faster, better, cheaper" approach, and Viking needed to pioneer unprecedented technologies under national pressure brought on by the Cold War and the aftermath of the Space Race , all under the prospect of possibly discovering extraterrestrial life for the first time. [ 35 ] The experiments had to adhere to a special 1971 directive that mandated that no single failure shall stop the return of more than one experiment—a difficult and expensive task for a device with over 40,000 parts. [ 35 ]
The Viking camera system cost $27.3 million to develop, or about $200 million in 2023 dollars. [ 35 ] [ 36 ] When the imaging system design was completed, it was difficult to find anyone who could manufacture its advanced design. [ 35 ] The program managers were later praised for fending off pressure to go with a simpler, less advanced imaging system, especially once the images arrived. [ 35 ] The program did, however, save some money by cutting a third lander and reducing the number of experiments on the lander. [ 35 ]
Overall NASA says that $1 billion in 1970s dollars was spent on the program, [ 5 ] [ 6 ] which when inflation-adjusted to 2023 dollars is about $6 billion. [ 36 ]
The Viking program ended on May 21, 1983. To prevent an imminent impact with Mars, the orbit of the Viking 1 orbiter was raised on August 7, 1980, before it was shut down 10 days later. Impact and potential contamination of the planet's surface are possible from 2019 onwards. [ 5 ]
The Viking 1 lander was found to be about 6 kilometers from its planned landing site by the Mars Reconnaissance Orbiter in December 2006. [ 37 ]
Each Viking lander carried a tiny dot of microfilm containing the names of several thousand people who had worked on the mission. [ 38 ] Several earlier space probes had carried message artifacts, such as the Pioneer plaque and the Voyager Golden Record . Later probes also carried memorials or lists of names, such as the Perseverance rover which recognizes the almost 11 million people who signed up to include their names on the mission.
|
https://en.wikipedia.org/wiki/Viking_program
|
Viktor Schauberger ( Austrian German: [ˈʃaʊbɛrɡɐ] ; 30 June 1885 – 25 September 1958) was an Austrian forest caretaker, naturalist , philosopher , inventor and pseudoscientist . [ 1 ] [ 2 ]
Schauberger was born in Holzschlag , Upper Austria on 30 June 1885. His parents were Leopold Schauberger and Josefa, née Klimitsch. From 1891 to 1897 he attended the elementary school in Aigen, then until 1900 the state grammar school in Linz. Until 1904 he went to the forestry school in Aggsbach in the Kartause Aggsbach, where he passed the exam as a forester. From 1904 to 1906 he was a forest clerk in Groß-Schweinbarth in Lower Austria .
|
https://en.wikipedia.org/wiki/Viktor_Schauberger
|
Viktor Vladimirovich Wagner , also Vagner ( Russian : Виктор Владимирович Вагнер ) (4 November 1908 – 15 August 1981) was a Russian mathematician , best known for his work in differential geometry and on semigroups .
Wagner was born in Saratov and studied at Moscow State University , where Veniamin Kagan was his advisor. He became the first geometry chair at Saratov State University . He received the Lobachevsky Medal in 1937.
Wagner was also awarded "the Order of Lenin , the Order of the Red Banner , and the title of Honoured Scientist RSFSR. Moreover, he was also accorded that rarest of privileges in the USSR: permission to travel abroad." [ 1 ]
Wagner is credited with noting that the collection of partial transformations on a set $X$ forms a semigroup $\mathcal{PT}_X$ which is a subsemigroup of the semigroup $\mathcal{B}_X$ of binary relations on the same set $X$, where the semigroup operation is composition of relations . "This simple unifying observation, which is nevertheless an important psychological hurdle, is attributed by Schein (1986) to V.V. Wagner." [ 2 ]
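A minimal Python sketch of this observation (modeling a partial transformation as a dict defined only on its domain; names are hypothetical) shows that composing two partial transformations yields another one, with associativity inherited from composition of relations:

```python
# Partial transformations on X, modeled as dicts defined only on their
# domains, compose like relations; the result is again a partial
# transformation on X, so they form a semigroup.
def compose(f, g):
    """(f o g)(x) = f(g(x)), defined only where the whole chain is defined."""
    return {x: f[g[x]] for x in g if g[x] in f}

X = {1, 2, 3}
f = {1: 2, 2: 2}          # undefined at 3
g = {2: 1, 3: 1}          # undefined at 1
print(compose(f, g))       # {2: 2, 3: 2} -- again a partial transformation
# Associativity, inherited from composition of relations:
h = {2: 3}
print(compose(compose(f, g), h) == compose(f, compose(g, h)))  # True
```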
|
https://en.wikipedia.org/wiki/Viktor_Wagner
|
Villy Sundström (born February 6, 1949) is a Swedish physical chemist known for his work in ultrafast science and molecular photochemistry using time-resolved laser and X-ray spectroscopy techniques. [ 1 ] [ 2 ]
Sundström studied chemistry at Umeå University , obtaining his PhD in 1977. During his study, he visited Bell Labs and worked under Peter Rentzepis . Upon his return to Sweden, he started building the first ultrafast spectroscopy laboratory in Scandinavia at Umeå University and later at Lund University in Sweden. [ 3 ] In 1994, Sundström was appointed professor of Chemical Dynamics and head of the Chemical Physics Department at Lund University . His group's research centers on the photophysics and photochemical processes in model systems of natural and artificial photosynthetic light harvesting, such as bacteriochlorophyll , carotenoids , transition metal complexes , organic and perovskite solar cells .
Sundström was an editor of the journal Chemical Physics Letters .
|
https://en.wikipedia.org/wiki/Villy_Sundström
|
Vilma Lucila Espín Guillois (7 April 1930 – 18 June 2007) was a Cuban revolutionary , feminist , and chemical engineer . She helped supply and organize the 26th of July Movement as an underground spy, and took an active role in many branches of the Cuban government from the conclusion of the revolution to her death. [ 2 ] Espín helped found the Federation of Cuban Women and promoted equal rights for Cuban women in all spheres of life. [ 3 ]
As the wife of Raúl Castro and the sister-in-law of Fidel Castro , she was essentially the First Lady of Cuba for about 45 years.
Vilma Espín Guillois was born on 7 April 1930, in Santiago de Cuba . [ 4 ] She was the daughter of a wealthy Cuban lawyer , Jose Espín, and his wife Margarita Guillois. She had four siblings: Nilsa, Iván, Sonia and José. [ 5 ] Espín attended Academia Pérez-Peña for primary school and studied ballet and singing at the Asociación Pro-Arte Cubano during the 1940s. [ 6 ] In the 1950s, she studied chemical engineering at Universidad de Oriente, Santiago de Cuba (one of the first women in Cuba to study this subject). [ 4 ] While attending Universidad de Oriente, she played volleyball and tennis, and was a soprano in the University Choir. [ 7 ] In university, Espín met her mentor Frank País in a university group called Oriente Revolutionary Action (ARO), which was responsible for the assault on the Moncada barracks. [ 7 ] After graduating, her father encouraged her to attend MIT in Cambridge, Massachusetts to complete her post-graduate studies, in the hope that visiting America would dissuade her from becoming involved in socialist activity. [ 8 ] When she finally acquiesced, her brief academic career at MIT left her with even more animosity toward the United States , as she officially joined the 26th of July Movement on her way back to Cuba through Mexico . [ 3 ] Espín completed only one semester at MIT. [ 4 ]
Returning home, she became more involved with the opposition to the dictator Fulgencio Batista . [ 4 ] A meeting with revolutionary leader Frank País led her to become a leader of the revolutionary movement in Oriente province. Espín met the Castro brothers, who had relocated to Mexico after their failed armed attack on the Moncada Barracks in July 1953 and their release from prison in 1955. Espín acted as a messenger between the 26th of July Movement in Mexico and País back in Cuba. She then went on to assist the revolutionaries in the Sierra Maestra mountains after the 26th of July Movement's return to Cuba on the Granma yacht in November 1956.
Espín's ability to speak both Spanish and English allowed her to represent the revolutionary movement on an international scale. [ 9 ] [ 10 ] Pepín Bosch, an executive of the Bacardi Corporation , arranged a meeting between CIA Inspector General Lyman Kirkpatrick and representatives of the 26th of July Movement in 1957. Espín, as both a revolutionary leader and the daughter of a Bacardi executive, told Kirkpatrick that the revolutionaries only wanted "what you Americans have: clean politics and a clean police system." [ 8 ] She also acted as an interpreter for an interview between New York Times reporter Herbert Matthews and Fidel Castro in 1957, which served the dual purpose of spreading news of the revolution and assuring Cubans and the international community that Batista's claims of Castro's death were false. [ 9 ]
Vilma Espín was an outspoken supporter of gender equality in Cuba , [ 9 ] but distinctly separated herself and the goals of the Federation of Cuban Women from traditional feminism, insisting that the organization advocated for the 'feminine', not the 'feminist'. [ 7 ] Her involvement in the revolution helped transform the role of women in Cuba, and in 1960 Espín became the president of the Federation of Cuban Women , remaining in that position until her death in 2007. The organization's primary goals were educating women, giving them the necessary skills to seek gainful employment, and above all encouraging them to participate in politics and support the revolutionary government. [ 3 ] In 1960, when sugar mills and cane fields were under attack across Cuba shortly before the Bay of Pigs invasion, the Federation of Cuban Women created the Emergency Medical Response Brigades to mobilize women against counter-revolution. [ 11 ] The Cuban government and the Federation encouraged women to join the labor force, even going so far as to pass the Cuban Family Code in 1975, a law mandating that men help with household chores and childcare to lighten the workload for working mothers. [ 9 ]
Espín served as a member of the Central Committee of the Cuban Communist Party from 1965 to 1989. [ 12 ] She also held many other roles in the Cuban government , including chair of the Commission for Social Prevention from 1967 to 1971, director of Industrial Development in the Ministry of Food in 1969, president of the Institute of Childcare in 1971, and member of the Cuban Council of State in 1976. [ 12 ] [ 2 ] In addition to her roles within Cuba, Espín also served as Cuba's representative at the United Nations General Assembly . [ 13 ]
Espín took on the role of Cuba's First Lady for 45 years, initially taking on the role as the sister-in-law to Fidel Castro , who was divorced at the time he came to power. [ 14 ] She officially became the First Lady in 2006 when her husband, Raúl Castro , became president. [ 13 ] Additionally, she was granted the title of "Secretary of State" in the Government of Cuba . [ 1 ]
Espín headed the Cuban Delegation to the Congress of the International Federation of Democratic Women in Chile in September 1959. [ 3 ] She also headed the Cuban delegations to subsequent Conferences on Women, praising them as "invaluable to women in developing countries." [ 15 ]
Espín was married to Raúl Castro , the former First Secretary of the Communist Party of Cuba and brother of former First Secretary Fidel Castro . Their wedding took place in 1959, only weeks after the 26th of July Movement had successfully overthrown dictator Fulgencio Batista . [ 8 ] She had four children (Deborah, Mariela, Nilsa, and Alejandro Castro Espín) and eight grandchildren. [ 4 ] Her daughter, Mariela Castro , currently heads the Cuban National Center for Sex Education , and her son, Alejandro Castro Espín , is a Colonel in the Ministry of Interior. [ 4 ]
Espín died in Havana at 4:14 p.m. EDT on 18 June 2007, following a long illness. [ 16 ] [ 17 ] An official mourning period was declared from 8 p.m. on 18 June until 10 p.m. on 19 June. A funeral ceremony was held at the Karl Marx Theatre in Havana the day after her death. Thousands of Cubans paid their respects in a receiving line at the Plaza of the Revolution in Havana. Raúl Castro was in the receiving line, but Fidel Castro was not present. [ 4 ] The Cuban government released a statement praising her as "one of the most relevant fighters for women's emancipation in our country and in the world." [ 14 ] Her body was cremated, and her remains rest in the Frank País Mausoleum, Municipio II Frente, in the province of Santiago de Cuba , Cuba. [ 18 ] The Vilma Espín elementary school was opened in Havana in April 2013. [ 19 ] Espín founded the Frente Continental de Mujeres Contra la Intervención (Continental Women's Front Against Intervention, FCMCI) [ 20 ] and the Regional Center of the International Democratic Federation of Women for the Americas and Caribbean. [ 7 ]
|
https://en.wikipedia.org/wiki/Vilma_Espín
|
Vilnius photometric system is a medium-band seven-colour photometric system (UPXYZVS), created in 1963 by Vytautas Straižys and his coworkers. The system is highly optimized for the classification of stars from ground-based observations, and medium-band passbands were chosen so that relatively faint stars could still be measured.
The temperature classification of early-type stars is based on the Balmer jump (Balmer discontinuity). To measure it, one needs two bandpasses placed in the ultraviolet : one shortward of the Balmer jump (the U magnitude) and one longward of it (the X magnitude).
The Y bandpass is near the breakpoint of the interstellar extinction law (interstellar extinction in the 300–800 nm region can be approximated by two straight lines , which intersect at ~435.5 nm).
The P magnitude is placed exactly on the Balmer jump in order to provide separation for luminosity classes of B - A - F stars .
The Z magnitude is placed on the Mg I triplet and the MgH molecular band . It is sensitive to the luminosity classes of G - K - M stars .
The S bandpass coincides with H-alpha line position and provides information about emission or absorption phenomena in that line.
Finally, the V magnitude is chosen to coincide with a similar bandpass in the UBV system . It provides the possibility to relate these two photometric systems.
Colour indices of the system were normalized to satisfy the condition:
U − P = P − X = X − Y = Y − Z = Z − V = V − S = 0
for un-reddened O-type stars .
|
https://en.wikipedia.org/wiki/Vilnius_photometric_system
|
The Vilsmeier–Haack reaction (also called the Vilsmeier reaction ) is the chemical reaction of a substituted formamide ( 1 ) with phosphorus oxychloride and an electron-rich arene ( 3 ) to produce an aryl aldehyde or ketone ( 5 ):
The reaction is named after Anton Vilsmeier and Albrecht Haack . [ 1 ] [ 2 ] [ 3 ]
For example, benzanilide and dimethylaniline react with phosphorus oxychloride to produce an unsymmetrical diaryl ketone. [ 4 ] Similarly, anthracene is formylated at the 9-position. [ 5 ] The reaction of anthracene with N -methylformanilide, also using phosphorus oxychloride, gives 9-anthracenecarboxaldehyde .
In general, the electron-rich arene ( 3 ) must be much more active than benzene for the reaction to proceed; phenols or anilines are good substrates. [ 6 ]
The reaction of a substituted amide with phosphorus oxychloride gives a substituted chloroiminium ion ( 2 ), also called the Vilsmeier reagent . The initial product is an iminium ion ( 4b ), which is hydrolyzed to the corresponding ketone or aldehyde during workup . [ 7 ]
|
https://en.wikipedia.org/wiki/Vilsmeier–Haack_reaction
|
Viltolarsen , sold under the brand name Viltepso , is a medication used for the treatment of Duchenne muscular dystrophy (DMD). [ 2 ] [ 3 ] [ 1 ] Viltolarsen is a Morpholino antisense oligonucleotide . [ 2 ] [ 1 ]
The most common side effects include upper respiratory tract infection , injection site reaction , cough , and pyrexia (fever). [ 2 ] [ 3 ] [ 1 ]
Viltolarsen was approved for medical use in the United States in August 2020. [ 2 ] [ 3 ] Following the approval of golodirsen in December 2019, viltolarsen became the second approved targeted treatment in the United States for people with this type of mutation. [ 2 ] [ 4 ] Approximately 8% of people with DMD have a mutation that is amenable to exon 53 skipping. [ 2 ]
Viltolarsen is indicated for the treatment of Duchenne muscular dystrophy (DMD) in people who have a confirmed mutation of the DMD gene that is amenable to exon 53 skipping. [ 2 ] [ 1 ]
DMD is a rare genetic disorder characterized by progressive muscle deterioration and weakness. [ 2 ] It is the most common type of muscular dystrophy. [ 2 ] DMD is caused by mutations in the DMD gene that result in an absence of dystrophin, a protein that helps keep muscle cells intact. [ 2 ] The first symptoms are usually seen between three and five years of age and worsen over time. [ 2 ] DMD occurs in approximately one out of every 3,600 male infants worldwide; in rare cases, it can affect females. [ 2 ]
The most common side effects include upper respiratory tract infection, injection site reaction, cough, and pyrexia (fever). [ 2 ] [ 3 ] [ 1 ]
Although kidney toxicity was not observed in the clinical studies, the clinical experience is limited, and kidney toxicity, including potentially fatal glomerulonephritis, has been observed after administration of some antisense oligonucleotides. [ 2 ]
Viltolarsen was developed by Nippon Shinyaku and the National Center of Neurology and Psychiatry (NCNP) in Japan, based on pre-clinical studies conducted by Toshifumi Yokota and colleagues, [ 5 ] [ 6 ] and was evaluated in two clinical studies with a total of 32 participants, all of whom were male and had genetically confirmed DMD. [ 2 ] The increase in dystrophin production was established in one of those two studies, a study that included sixteen DMD participants, with eight participants receiving viltolarsen at the recommended dose. [ 2 ] In the study, dystrophin levels increased, on average, from 0.6% of normal at baseline to 5.9% of normal at week 25. [ 2 ] Trial 1 provided data for evaluation of the benefits of viltolarsen. [ 3 ] The combined populations from both trials provided data for evaluation of the side effects of viltolarsen. [ 3 ] Trial 1 was conducted at six sites in the United States and Canada and Trial 2 was conducted at five sites in Japan. [ 3 ] All participants in both trials were on a stable dose of corticosteroids for at least three months before entering the trials. [ 3 ]
The U.S. Food and Drug Administration (FDA) concluded that the applicant's data demonstrated an increase in dystrophin production that is reasonably likely to predict clinical benefit in people with DMD who have a confirmed mutation of the dystrophin gene amenable to exon 53 skipping. [ 2 ] A clinical benefit of the drug has not been established. [ 2 ] In making this decision, the FDA considered the potential risks associated with the drug, the life-threatening and debilitating nature of the disease, and the lack of available therapies. [ 2 ]
The application for viltolarsen was granted priority review designation and the FDA granted the approval to NS Pharma, Inc. [ 2 ]
Viltolarsen costs around US$733,000 per year for a person who weighs 30 kilograms (66 lb). [ 7 ]
|
https://en.wikipedia.org/wiki/Viltolarsen
|
Vimentin is a structural protein that in humans is encoded by the VIM gene . Its name comes from the Latin vimentum which refers to an array of flexible rods. [ 5 ]
Vimentin is a type III intermediate filament (IF) protein that is expressed in mesenchymal cells. IF proteins are found in all animal cells [ 6 ] as well as bacteria . [ 7 ] Intermediate filaments, along with tubulin -based microtubules and actin -based microfilaments , comprise the cytoskeleton . All IF proteins are expressed in a highly developmentally-regulated fashion; vimentin is the major cytoskeletal component of mesenchymal cells. Because of this, vimentin is often used as a marker of mesenchymally-derived cells or cells undergoing an epithelial-to-mesenchymal transition (EMT) during both normal development and metastatic progression.
The assembly of the fibrous vimentin filament that forms the cytoskeleton follows a gradual sequence. The vimentin monomer has a central α-helical domain , capped on each end by non- helical amino (head) and carboxyl (tail) domains. [ 8 ] Two monomers are likely co-translationally expressed in a way that facilitates their interaction, forming a coiled-coil dimer, which is the basic subunit of vimentin assembly. [ 9 ] A pair of coiled-coil dimers connect in an antiparallel fashion to form a tetramer. Eight tetramers join to form what is known as the unit-length filament (ULF); ULFs then join end-to-end and elongate, followed by compaction, to form the fibrous filaments. [ 10 ]
The α-helical sequences contain a pattern of hydrophobic amino acids that contribute to forming a "hydrophobic seal" on the surface of the helix. [ 8 ] In addition, there is a periodic distribution of acidic and basic amino acids that seems to play an important role in stabilizing coiled-coil dimers. [ 8 ] The spacing of the charged residues is optimal for ionic salt bridges , which allows for the stabilization of the α-helix structure. While this type of stabilization is intuitive for intrachain interactions, rather than interchain interactions, scientists have proposed that perhaps the switch from intrachain salt bridges formed by acidic and basic residues to the interchain ionic associations contributes to the assembly of the filament. [ 8 ]
Vimentin plays a significant role in supporting and anchoring the position of the organelles in the cytosol . Vimentin is attached to the nucleus , endoplasmic reticulum , and mitochondria , either laterally or terminally. [ 11 ]
The dynamic nature of vimentin is important in offering flexibility to the cell. Under mechanical stress in vivo , vimentin was found to provide cells with a resilience that the microtubule and actin filament networks lack. Therefore, in general, it is accepted that vimentin is the cytoskeletal component responsible for maintaining cell integrity. (Cells without vimentin were found to be extremely delicate when disturbed with a micropuncture.) [ 12 ] Transgenic mice that lack vimentin appeared normal and did not show functional differences. [ 13 ] It is possible that the microtubule network compensated for the absence of the intermediate filament network. This result supports an intimate interaction between microtubules and vimentin. Moreover, when microtubule depolymerizers were present, vimentin reorganization occurred, once again implying a relationship between the two systems. [ 12 ] On the other hand, wounded mice that lack the vimentin gene heal more slowly than their wild type counterparts. [ 14 ]
In essence, vimentin is responsible for maintaining cell shape, integrity of the cytoplasm, and stabilizing cytoskeletal interactions. Vimentin has been shown to eliminate toxic proteins in JUNQ and IPOD inclusion bodies in asymmetric division of mammalian cell lines . [ 15 ]
Also, vimentin has been found to control the transport of low-density lipoprotein (LDL)-derived cholesterol from a lysosome to the site of esterification. [ 16 ] When this transport was blocked, cells were found to store a much lower percentage of the lipoprotein-derived cholesterol than normal cells with vimentin. This appears to be the first biochemical function in any cell shown to depend on a cellular intermediate filament network. This type of dependence has ramifications for the adrenal cells, which rely on cholesteryl esters derived from LDL. [ 16 ]
Vimentin plays a role in aggresome formation, where it forms a cage surrounding a core of aggregated protein. [ 17 ]
In addition to its conventional intracellular localisation, vimentin can be found extracellularly. Vimentin can be expressed as a cell surface protein and has suggested roles in immune reactions. It can also be released in phosphorylated forms into the extracellular space by activated macrophages ; astrocytes are also known to release vimentin. [ 18 ]
It has been used as a sarcoma tumor marker to identify mesenchyme . [ 19 ] [ 20 ] Its specificity as a biomarker has been disputed by Jerad Gardner. [ 21 ] Vimentin is present in spindle cell squamous cell carcinoma. [ 22 ] [ 23 ]
Methylation of the vimentin gene has been established as a biomarker of colon cancer and this is being utilized in the development of fecal tests for colon cancer. Statistically significant levels of vimentin gene methylation have also been observed in certain upper gastrointestinal pathologies such as Barrett's esophagus , esophageal adenocarcinoma, and intestinal type gastric cancer. [ 24 ] High levels of DNA methylation in the promoter region have also been associated with markedly decreased survival in hormone positive breast cancers. [ 25 ] Downregulation of vimentin was identified in cystic variant of papillary thyroid carcinoma using a proteomic approach. [ 26 ] See also Anti-citrullinated protein antibody for its use in diagnosis of rheumatoid arthritis .
Vimentin was discovered to be an attachment factor for SARS-CoV-2 by Nader Rahimi and colleagues. [ 27 ]
Vimentin has been shown to interact with a number of other proteins. The 3' UTR of vimentin mRNA has been found to bind a 46 kDa protein. [ 39 ]
|
https://en.wikipedia.org/wiki/Vimentin
|
Vinay Vithal Deodhar (3 December 1948 – 18 January 2015) was a Professor Emeritus in the Department of Mathematics at Indiana University . He worked in the area of algebraic groups and representation theory . [ 1 ]
Deodhar was born in Mumbai (Bombay), India in 1948. [ 2 ]
Deodhar earned his Ph.D. from the University of Mumbai in 1974 for his work On Central Extensions of Rational Points of Algebraic Groups done under the supervision of M. S. Raghunathan .
After his doctorate, he was invited to join the School of Mathematics of the Tata Institute of Fundamental Research . Simultaneously he was a visiting scholar at the Institute for Advanced Study (IAS) in Princeton during 1975–77 and then a visiting professor at the Australian National University in Canberra . In 1981 he was appointed to a professorship at Indiana University , Bloomington, Indiana , where he remained until his death in 2015. He spent a further period as a visiting scholar at the IAS in 1992–93. [ 3 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Vinay_V._Deodhar
|
In mathematics, Vincent's theorem —named after Alexandre Joseph Hidulphe Vincent —is a theorem that isolates the real roots of polynomials with rational coefficients.
Even though Vincent's theorem is the basis of the fastest method for the isolation of the real roots of polynomials, it was almost totally forgotten, having been overshadowed by Sturm's theorem ; consequently, it does not appear in any of the classical books on the theory of equations (of the 20th century), except for Uspensky 's book. Two variants of this theorem are presented, along with several (continued fractions and bisection) real root isolation methods derived from them.
Two versions of this theorem are presented: the continued fractions version due to Vincent, [ 1 ] [ 2 ] [ 3 ] and the bisection version due to Alesina and Galuzzi. [ 4 ] [ 5 ]
If in a polynomial equation with rational coefficients and without multiple roots, one makes successive transformations of the form

x ← a 1 + 1/ x , x ← a 2 + 1/ x , x ← a 3 + 1/ x , …

where a 1 , a 2 , a 3 , … are any positive numbers greater than or equal to one, then after a number of such transformations, the resulting transformed equation either has zero sign variations or it has a single sign variation . In the first case there is no root, whereas in the second case there is a single positive real root. Furthermore, the corresponding root of the proposed equation is approximated by the finite continued fraction [ 1 ] [ 2 ] [ 3 ]

a 1 + 1/( a 2 + 1/( a 3 + ⋯))

Moreover, if infinitely many numbers a 1 , a 2 , a 3 , … satisfying this property can be found, then the root is represented by the (infinite) corresponding continued fraction.

The above statement is an exact translation of the theorem found in Vincent's original papers; [ 1 ] [ 2 ] [ 3 ] however, some remarks are needed for a clearer understanding.
Let p ( x ) be a real polynomial of degree deg( p ) that has only simple roots. It is possible to determine a positive quantity δ so that for every pair of positive real numbers a , b with | b − a | < δ, every transformed polynomial of the form

( 1 ): (1 + x)^deg(p) p ( ( a + bx ) / (1 + x) )
has exactly 0 or 1 sign variations . The second case is possible if and only if p ( x ) has a single root within ( a , b ).
From equation ( 1 ) the following criterion is obtained for determining whether a polynomial has any roots in the interval ( a , b ):
Perform on p ( x ) the substitution x ← ( a + bx ) / (1 + x), multiply through by (1 + x)^deg(p) to clear denominators, and count the number of sign variations in the sequence of coefficients of the transformed polynomial; this number gives an upper bound on the number of real roots p ( x ) has inside the open interval ( a , b ). More precisely, the number ρ ab ( p ) of real roots in the open interval ( a , b )—multiplicities counted—of the polynomial p ( x ) in R [ x ], of degree deg( p ), is bounded above by the number of sign variations var ab ( p ), where

var ab ( p ) = var( (1 + x)^deg(p) p ( ( a + bx ) / (1 + x) ) )
As in the case of Descartes' rule of signs if var ab ( p ) = 0 it follows that ρ ab ( p ) = 0 and if var ab ( p ) = 1 it follows that ρ ab ( p ) = 1.
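Because the test is nothing more than a substitution followed by a sign-variation count, it is short to express in code. The following Python sketch (using SymPy; the helper names are ours, not from the papers) applies the a_b roots test to the polynomial x³ − 7 x + 7 used in the examples later in this article:

```python
# Illustrative sketch of the Alesina-Galuzzi "a_b roots test".
from sympy import Poly, cancel, symbols

x = symbols('x')

def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def var_ab(p, a, b):
    """Substitute x <- (a + b*x)/(1 + x), clear denominators with
    (1 + x)^deg(p), and count sign variations; the result bounds the
    number of real roots of p inside the open interval (a, b)."""
    n = Poly(p, x).degree()
    q = Poly(cancel((1 + x)**n * p.subs(x, (a + b*x) / (1 + x))), x)
    return sign_variations(q.all_coeffs())

p = x**3 - 7*x + 7        # the polynomial used in the examples below
print(var_ab(p, 1, 2))    # 2: at most two roots in (1, 2)
print(var_ab(p, 2, 4))    # 0: certainly no roots in (2, 4)
```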
A special case of the Alesina–Galuzzi "a_b roots test" is Budan's "0_1 roots test" .
A detailed discussion of Vincent's theorem, its extension, the geometrical interpretation of the transformations involved and three different proofs can be found in the work by Alesina and Galuzzi. [ 4 ] [ 5 ] A fourth proof is due to Ostrowski [ 6 ] who rediscovered a special case of a theorem stated by Obreschkoff , [ 7 ] p. 81, in 1920–1923.
To prove (both versions of) Vincent's theorem Alesina and Galuzzi show that after a series of transformations mentioned in the theorem, a polynomial with one positive root eventually has one sign variation. To show this, they use the following corollary to the theorem by Obreschkoff of 1920–1923 mentioned earlier; that is, the following corollary gives the necessary conditions under which a polynomial with one positive root has exactly one sign variation in the sequence of its coefficients; see also the corresponding figure.
Consider now the Möbius transformation M ( x ) = ( ax + b ) / ( cx + d )
and the three circles shown in the corresponding figure; assume that a / c < b / d .
From the above it becomes obvious that if a polynomial has a single positive root inside the eight-shaped figure and all other roots are outside of it, it presents one sign variation in the sequence of its coefficients. This also guarantees the termination of the process.
In his fundamental papers, [ 1 ] [ 2 ] [ 3 ] Vincent presented examples that show precisely how to use his theorem to isolate real roots of polynomials with continued fractions . However, the resulting method had exponential computing time, a fact that mathematicians of the time must have realized, and one that Uspensky [ 8 ] p. 136 realized a century later.
The exponential nature of Vincent's algorithm is due to the way the partial quotients a i (in Vincent's theorem ) are computed. That is, to compute each partial quotient a i (that is, to locate where the roots lie on the x -axis) Vincent uses Budan's theorem as a "no roots test" ; in other words, to find the integer part of a root Vincent performs successive substitutions of the form x ← x +1 and stops only when the polynomials p ( x ) and p ( x +1) differ in the number of sign variations in the sequence of their coefficients (i.e. when the number of sign variations of p ( x +1) is decreased).
See the corresponding diagram where the root lies in the interval (5, 6). It can be easily inferred that, if the root is far away from the origin, it takes a lot of time to find its integer part this way, hence the exponential nature of Vincent's method. Below there is an explanation of how this drawback is overcome.
Vincent was the last author in the 19th century to use his theorem for the isolation of the real roots of a polynomial.
The reason for that was the appearance of Sturm's theorem in 1827, which solved the real root isolation problem in polynomial time, by defining the precise number of real roots a polynomial has in a real open interval ( a , b ). The resulting (Sturm's) method for computing the real roots of polynomials has been the only one widely known and used ever since—up to about 1980, when it was replaced (in almost all computer algebra systems ) by methods derived from Vincent's theorem , the fastest one being the Vincent–Akritas–Strzeboński (VAS) method. [ 9 ]
Serret included in his Algebra, [ 10 ] pp 363–368, Vincent's theorem along with its proof and directed all interested readers to Vincent's papers for examples on how it is used. Serret was the last author to mention Vincent's theorem in the 19th century.
In the 20th century Vincent's theorem cannot be found in any of the theory of equations books; the only exceptions are the books by Uspensky [ 8 ] and Obreschkoff , [ 7 ] where in the second there is just the statement of the theorem.
It was in Uspensky 's book [ 8 ] that Akritas found Vincent's theorem and made it the topic of his Ph.D. Thesis "Vincent's Theorem in Algebraic Manipulation", North Carolina State University, USA , 1978. A major achievement at the time was getting hold of Vincent's original paper of 1836, something that had eluded Uspensky and had consequently led him into a great misunderstanding . Vincent's original paper of 1836 was made available to Akritas through the commendable efforts (interlibrary loan) of a librarian in the Library of the University of Wisconsin–Madison, USA .
Isolation of the real roots of a polynomial is the process of finding open disjoint intervals such that each contains exactly one real root and every real root is contained in some interval. According to the French school of mathematics of the 19th century, this is the first step in computing the real roots, the second being their approximation to any degree of accuracy; moreover, the focus is on the positive roots, because to isolate the negative roots of the polynomial p ( x ) replace x by − x ( x ← − x ) and repeat the process.
The continued fractions version of Vincent's theorem can be used to isolate the positive roots of a given polynomial p ( x ) of degree deg( p ). To see this, represent by the Möbius transformation

M ( x ) = ( ax + b ) / ( cx + d )

the continued fraction that leads to a transformed polynomial

( 2 ): f ( x ) = ( cx + d )^deg(p) p ( M ( x ) )
with one sign variation in the sequence of its coefficients. Then, the single positive root of f ( x ) (in the interval (0, ∞)) corresponds to that positive root of p ( x ) that is in the open interval with endpoints b d {\displaystyle {\frac {b}{d}}} and a c {\displaystyle {\frac {a}{c}}} . These endpoints are not ordered and correspond to M (0) and M (∞) respectively.
Therefore, to isolate the positive roots of a polynomial, all that must be done is to compute—for each root—the variables a , b , c , d of the corresponding Möbius transformation M ( x ) = ( ax + b ) / ( cx + d )
that leads to a transformed polynomial as in equation ( 2 ), with one sign variation in the sequence of its coefficients.
Crucial Observation: The variables a , b , c , d of a Möbius transformation M ( x ) = ( ax + b ) / ( cx + d ) (in Vincent's theorem ) leading to a transformed polynomial—as in equation ( 2 )—with one sign variation in the sequence of its coefficients can be computed either by continued fractions or by bisection.
The "bisection part" of this all important observation appeared as a special theorem in the papers by Alesina and Galuzzi. [ 4 ] [ 5 ]
All methods described below (see the article on Budan's theorem for their historical background) need to compute (once) an upper bound, ub , on the values of the positive roots of the polynomial under consideration. The exception is the VAS method, where lower bounds, lb , must additionally be computed at almost every cycle of the main loop. To compute the lower bound lb of the polynomial p ( x ), compute the upper bound ub of the polynomial x^deg(p) p (1/ x ) and set lb = 1/ ub .
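As a concrete illustration of this ub/lb relationship, here is a minimal Python sketch that uses the classical Cauchy bound as a simple stand-in for the sharper bounds discussed next; the function names are ours:

```python
# Cauchy's bound is only a stand-in here; practical implementations use the
# sharper Akritas-Strzebonski-Vigklas bounds mentioned in the text.
from sympy import Poly, symbols

x = symbols('x')

def cauchy_ub(p):
    """1 + max|a_i/a_n| bounds the absolute value of every root of p,
    so in particular it bounds the positive roots."""
    c = Poly(p, x).all_coeffs()          # descending: a_n, ..., a_0
    return 1 + max(abs(ai / c[0]) for ai in c[1:])

def positive_lower_bound(p):
    """lb = 1/ub of the reversed polynomial x^deg(p) * p(1/x)."""
    c = Poly(p, x).all_coeffs()
    return 1 / cauchy_ub(Poly(c[::-1], x).as_expr())   # coefficient reversal

p = x**3 - 7*x + 7
print(cauchy_ub(p))             # 8: all positive roots lie below 8
print(positive_lower_bound(p))  # 1/2: no positive root lies below 1/2
```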
Excellent (upper and lower) bounds on the values of just the positive roots of polynomials have been developed by Akritas, Strzeboński and Vigklas based on previous work by Doru Stefanescu. They are described in P. S. Vigklas' Ph.D. Thesis [ 12 ] and elsewhere. [ 13 ] These bounds have already been implemented in the computer algebra systems Mathematica , SageMath , SymPy , Xcas etc.
All three methods described below follow the excellent presentation of François Boulier, [ 14 ] p. 24.
Only one continued fractions method derives from Vincent's theorem . As stated above , it started in the 1830s when Vincent presented, in his papers, [ 1 ] [ 2 ] [ 3 ] several examples showing how to use his theorem to isolate the real roots of polynomials with continued fractions . However, the resulting method had exponential computing time. Below is an explanation of how this method evolved.
This is the second method (after VCA ) developed to handle the exponential behavior of Vincent's method.
The VAS continued fractions method is a direct implementation of Vincent's theorem. It was originally presented by Vincent in his papers of 1834 to 1838 [ 1 ] [ 2 ] [ 3 ] in an exponential form; namely, Vincent computed each partial quotient a i by a series of unit increments a i ← a i + 1, which are equivalent to substitutions of the form x ← x + 1.
Vincent's method was converted into its polynomial complexity form by Akritas, who in his 1978 Ph.D. Thesis ( Vincent's theorem in algebraic manipulation , North Carolina State University, USA) computed each partial quotient a i as the lower bound, lb , on the values of the positive roots of a polynomial. This is called the ideal positive lower root bound that computes the integer part of the smallest positive root (see the corresponding figure). To wit, now set a i ← lb or, equivalently, perform the substitution x ← x + lb , which takes about the same time as the substitution x ← x + 1.
Finally, since the ideal positive lower root bound does not exist, Strzeboński [ 15 ] introduced in 2005 the substitution x ← lb_computed · x whenever lb_computed > 16; in general lb > lb_computed , and the value 16 was determined experimentally. Moreover, it has been shown [ 15 ] that the VAS ( continued fractions ) method is faster than the fastest implementation of the VCA (bisection) method, [ 16 ] a fact that was confirmed [ 17 ] independently; more precisely, for the Mignotte polynomials of high degree VAS is about 50,000 times faster than the fastest implementation of VCA.
In 2007, Sharma [ 18 ] removed the hypothesis of the ideal positive lower bound and proved that VAS is still polynomial in time.
VAS is the default algorithm for root isolation in Mathematica , SageMath , SymPy , Xcas .
For a comparison between Sturm's method and VAS use the functions realroot(poly) and time(realroot(poly)) of Xcas . By default, to isolate the real roots of poly realroot uses the VAS method; to use Sturm's method write realroot(sturm, poly). See also the External links for an application by A. Berkakis for Android devices that does the same thing.
Here is how VAS( p , M ) works, where for simplicity Strzeboński's contribution is not included; a runnable sketch follows the input/output description below.
Below is a recursive presentation of VAS( p , M ).
VAS ( p , M ):
Input : A univariate, square-free polynomial p ( x ) ∈ Z[ x ], p (0) ≠ 0, of degree deg( p ), and the Möbius transformation M ( x ) = ( ax + b ) / ( cx + d ), initially the identity M ( x ) = x .
Output : A list of isolating intervals of the positive roots of p ( x ).
Remarks
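As noted above, here is a self-contained Python sketch of the VAS recursion, assembled from the general description in this article. It assumes a square-free input with p(0) ≠ 0, omits Strzeboński's x ← lb·x substitution, and uses a simple Cauchy-based lower bound, so it illustrates the idea rather than the tuned algorithm:

```python
# Illustrative VAS sketch; assumptions: square-free p, p(0) != 0, and a
# simple Cauchy-type lower bound instead of the sharper published bounds.
from fractions import Fraction
from math import floor, inf
from sympy import Poly, symbols

x = symbols('x')

def var(p):
    """Sign variations in the coefficient sequence of p."""
    signs = [c for c in p.all_coeffs() if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def reversal(p):
    """x^deg(p) * p(1/x), computed by reversing the coefficient list."""
    return Poly(p.all_coeffs()[::-1], x)

def cauchy_ub(p):
    c = p.all_coeffs()
    return 1 + max(abs(Fraction(int(ai), int(c[0]))) for ai in c[1:])

def vas(p, M=(1, 0, 0, 1)):
    """Isolating intervals for the positive roots of p.
    M = (a, b, c, d) encodes the Moebius map (a*x + b)/(c*x + d)
    accumulated so far (initially the identity)."""
    a, b, c, d = M
    v = var(p)
    if v == 0:
        return []
    if v == 1:                                # Vincent: exactly one root here
        ends = sorted([Fraction(b, d), Fraction(a, c) if c else inf])
        return [tuple(ends)]
    s = floor(1 / cauchy_ub(reversal(p)))     # integer lower root bound
    if s >= 1:                                # slide past the rootless part
        p, b, d = p.shift(s), a * s + b, c * s + d
    roots = []
    if p.eval(1) == 0:                        # exact rational root at M(1)
        r = Fraction(a + b, c + d)
        roots.append((r, r))
        p = p.quo(Poly(x - 1, x))             # deflate so no branch re-finds it
    roots += vas(p.shift(1), (a, a + b, c, c + d))            # part in (1, oo)
    roots += vas(reversal(p).shift(1), (b, a + b, d, c + d))  # part in (0, 1)
    return roots

p = Poly(x**3 - 7*x + 7, x)
print(vas(p))   # [(Fraction(1, 1), Fraction(3, 2)), (Fraction(3, 2), Fraction(2, 1))]
```

On the example of the next subsection this sketch reproduces the isolation intervals (1, 3/2) and (3/2, 2).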
We apply the VAS method to p ( x ) = x³ − 7 x + 7 (note that M ( x ) = x ).
List of isolation intervals: { }.
List of pairs { p , M } to be processed:
Remove the first and process it.
List of isolation intervals: { }.
List of pairs { p , M } to be processed:
Remove the first and process it.
List of isolation intervals: {( 3 / 2 , 2)}.
List of pairs { p , M } to be processed:
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of pairs { p , M } to be processed:
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of pairs { p , M } to be processed: ∅ .
Finished.
Therefore, the two positive roots of the polynomial p ( x ) = x³ − 7 x + 7 lie inside the isolation intervals (1, 3/2) and (3/2, 2). Each root can be approximated by (for example) bisecting the isolation interval it lies in until the difference of the endpoints is smaller than 10⁻⁶ ; following this approach, the roots turn out to be ρ 1 = 1.3569 and ρ 2 = 1.69202 .
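A minimal sketch of this refinement step, using exact rational arithmetic so the isolating property is never lost to rounding:

```python
# Bisection refinement of an isolating interval; width 10^-6 as in the text.
from sympy import Rational, symbols

x = symbols('x')
p = x**3 - 7*x + 7

def refine(p, lo, hi, width=Rational(1, 10**6)):
    """Shrink the isolating interval (lo, hi) by bisection until it is
    narrower than `width`."""
    while hi - lo > width:
        mid = (lo + hi) / 2
        if p.subs(x, lo) * p.subs(x, mid) <= 0:
            hi = mid        # sign change in the left half
        else:
            lo = mid        # otherwise the root is in the right half
    return lo, hi

lo, hi = refine(p, Rational(1), Rational(3, 2))
print(float(lo), float(hi))   # both about 1.35690, i.e. the root rho_1 above
```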
There are various bisection methods derived from Vincent's theorem ; they are all presented and compared elsewhere. [ 19 ] Here the two most important of them are described, namely, the Vincent–Collins–Akritas (VCA) method and the Vincent–Alesina–Galuzzi (VAG) method.
The Vincent–Alesina–Galuzzi (VAG) method is the simplest of all methods derived from Vincent's theorem but has the most time-consuming test (in line 1) to determine if a polynomial has roots in the interval of interest; this makes it the slowest of the methods presented in this article.
By contrast, the Vincent–Collins–Akritas (VCA) method is more complex but uses a simpler test (in line 1) than VAG . This, along with certain improvements, [ 16 ] has made VCA the fastest bisection method.
This was the first method developed to overcome the exponential nature of Vincent's original approach , and has had quite an interesting history as far as its name is concerned. This method, which isolates the real roots using Descartes' rule of signs and Vincent's theorem , had been originally called modified Uspensky's algorithm by its inventors Collins and Akritas. [ 11 ] After going through names like "Collins–Akritas method" and "Descartes' method" (too confusing if one considers Fourier's article [ 20 ] ), it was finally François Boulier, of Lille University, who gave it the name Vincent–Collins–Akritas (VCA) method, [ 14 ] p. 24, based on the fact that "Uspensky's method" does not exist [ 21 ] and neither does "Descartes' method". [ 22 ] The best implementation of this method is due to Rouillier and Zimmerman, [ 16 ] and to this date, it is the fastest bisection method. It has the same worst-case complexity as Sturm's algorithm, but is almost always much faster. It has been implemented in Maple 's RootFinding package.
Here is how VCA( p , ( a , b )) works; a runnable sketch follows the input/output description below.
Below is a recursive presentation of the original algorithm VCA( p , ( a , b )).
VCA ( p , ( a , b ))
Input : A univariate, square-free polynomial p ( ub · x ) ∈ Z[ x ], p (0) ≠ 0, of degree deg( p ), and the open interval ( a , b ) = (0, ub ), where ub is an upper bound on the values of the positive roots of p ( x ). (The positive roots of p ( ub · x ) are all in the open interval (0, 1).) Output : A list of isolating intervals of the positive roots of p ( x ).
Remark
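The following Python sketch reconstructs the VCA recursion from the description above (helper names are ours). It assumes the input polynomial has already been rescaled by x ← ub·x, so that all positive roots of interest lie in (0, 1):

```python
# Illustrative VCA sketch; assumes a square-free input already rescaled so
# that its positive roots lie in the open interval (0, 1).
from sympy import Poly, Rational, symbols

x = symbols('x')

def var(p):
    signs = [c for c in p.all_coeffs() if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def roots_in_01_bound(p):
    """Budan's 0_1 roots test: the sign variations of
    (1 + x)^deg(p) * p(1/(1 + x)) bound the number of roots of p in (0, 1)."""
    return var(Poly(p.all_coeffs()[::-1], x).shift(1))

def vca(p, a, b):
    v = roots_in_01_bound(p)
    if v == 0:
        return []
    if v == 1:
        return [(a, b)]
    n, m = p.degree(), (a + b) / 2
    left = Poly((2**n * p.as_expr().subs(x, x / 2)).expand(), x)  # covers (0, 1/2)
    right = left.shift(1)                                         # covers (1/2, 1)
    found = []
    if right.eval(0) == 0:                       # the midpoint itself is a root
        found.append((m, m))
        right = Poly(right.all_coeffs()[:-1], x) # deflate the factor x
    return vca(left, a, m) + found + vca(right, m, b)

# p_orig(x) = x^3 - 7x + 7 with ub = 4, so start from p(4x) = 64x^3 - 28x + 7:
p = Poly(64*x**3 - 28*x + 7, x)
print(vca(p, Rational(0), Rational(4)))   # [(1, 3/2), (3/2, 2)]
```

Run on the example of the next subsection, the recursion even passes through the intermediate call VCA(64x³ + 576x² − 64x − 64, (3/2, 2)) shown there.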
Given the polynomial p orig ( x ) = x³ − 7 x + 7 and considering as an upper bound [ 12 ] [ 13 ] on the values of the positive roots ub = 4, the arguments of the VCA method are: p ( x ) = 64 x³ − 28 x + 7 and ( a , b ) = (0, 4).
List of isolation intervals: { }.
List of pairs { p , I } to be processed:
Remove the first and process it.
List of isolation intervals: { }.
List of pairs { p , I } to be processed:
Remove the first and process it.
List of isolation intervals: { }.
List of pairs { p , I } to be processed:
Remove the first and process it.
List of isolation intervals: { }.
List of pairs { p , I } to be processed:
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 )}.
List of pairs { p , I } to be processed:
Remove the first and process it.
VCA(64 x³ + 576 x² − 64 x − 64, (3/2, 2))
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of pairs { p , I } to be processed:
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of pairs { p , I } to be processed: ∅ .
Finished.
Therefore, the two positive roots of the polynomial p ( x ) = x³ − 7 x + 7 lie inside the isolation intervals (1, 3/2) and (3/2, 2). Each root can be approximated by (for example) bisecting the isolation interval it lies in until the difference of the endpoints is smaller than 10⁻⁶ ; following this approach, the roots turn out to be ρ 1 = 1.3569 and ρ 2 = 1.69202 .
This was developed last and is the simplest real root isolation method derived from Vincent's theorem .
Here is how VAG( p , ( a , b )) works; a runnable sketch follows the input/output description below.
Below is a recursive presentation of VAG( p , ( a , b )).
VAG ( p , ( a , b )) Input : A univariate, square-free polynomial p ( x ) ∈ Z[ x ], p (0) ≠ 0, of degree deg( p ), and the open interval ( a , b ) = (0, ub ), where ub is an upper bound on the values of the positive roots of p ( x ). Output : A list of isolating intervals of the positive roots of p ( x ).
Remarks
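A Python sketch of the VAG loop follows; it repeats the var_ab helper from the a_b roots test sketch earlier in the article so the block stands alone, and it is a simplified reading of the method, not a reference implementation:

```python
# Illustrative VAG sketch built directly on the a_b roots test.
from sympy import Poly, Rational, cancel, symbols

x = symbols('x')

def var_ab(p, a, b):
    """The a_b roots test: an upper bound on the roots of p in (a, b)."""
    n = Poly(p, x).degree()
    q = Poly(cancel((1 + x)**n * p.subs(x, (a + b*x) / (1 + x))), x)
    signs = [c for c in q.all_coeffs() if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def vag(p, a, b):
    isolated, todo = [], [(a, b)]
    while todo:
        lo, hi = todo.pop(0)           # process intervals first-in, first-out
        v = var_ab(p, lo, hi)
        if v == 1:                     # exactly one root: isolating interval
            isolated.append((lo, hi))
        elif v > 1:                    # may hold several roots: bisect
            mid = (lo + hi) / 2
            if p.subs(x, mid) == 0:    # the midpoint itself is a root
                isolated.append((mid, mid))
            todo += [(lo, mid), (mid, hi)]
    return isolated

p = x**3 - 7*x + 7
print(vag(p, Rational(0), Rational(4)))   # [(1, 3/2), (3/2, 2)]
```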
Given the polynomial p ( x ) = x³ − 7 x + 7 and considering as an upper bound [ 12 ] [ 13 ] on the values of the positive roots ub = 4, the arguments of VAG are: p ( x ) = x³ − 7 x + 7 and ( a , b ) = (0, 4).
List of isolation intervals: {}.
List of intervals to be processed: {(0, 2), (2, 4)}.
Remove the first and process it.
List of isolation intervals: {}.
List of intervals to be processed: {(0, 1), (1, 2), (2, 4)}.
Remove the first and process it.
List of isolation intervals: {}.
List of intervals to be processed: {(1, 2), (2, 4)}.
Remove the first and process it.
List of isolation intervals: {}.
List of intervals to be processed: {(1, 3 / 2 ), ( 3 / 2 , 2), (2, 4)}.
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 )}.
List of intervals to be processed: {( 3 / 2 , 2), (2, 4)}.
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of intervals to be processed: {(2, 4)}.
Remove the first and process it.
List of isolation intervals: {(1, 3 / 2 ), ( 3 / 2 , 2)}.
List of intervals to be processed: ∅.
Finished.
Therefore, the two positive roots of the polynomial p ( x ) = x³ − 7 x + 7 lie inside the isolation intervals (1, 3/2) and (3/2, 2). Each root can be approximated by (for example) bisecting the isolation interval it lies in until the difference of the endpoints is smaller than 10⁻⁶ ; following this approach, the roots turn out to be ρ 1 = 1.3569 and ρ 2 = 1.69202 .
|
https://en.wikipedia.org/wiki/Vincent's_theorem
|
Vincent Marks (10 June 1930 – 6 November 2023) was an English pathologist and clinical biochemist known for his works on studying insulin and hypoglycemia . His contributions to medical science include simplifying low blood glucose testing, introducing insulin radioimmunoassay , and advancing diabetes research. Marks played an important role in high-profile medico-legal cases, notably providing expert testimony that helped acquit Danish-born British socialite Claus von Bülow in 1985, a case that was the basis of the Oscar-winning movie Reversal of Fortune (1990).
Marks was also a nutritionist who studied intestinal hormones and coined the term "muesli belt malnutrition", referring to parents feeding their children what are considered extremely healthy foods but in the process depriving them of essential fats .
Vincent Marks was born on 10 June 1930, in Harlesden , North West London, to Lewis and Rose (née Goldbaum) Marks, in a Jewish household. His parents ran a pub. [ 1 ] [ 2 ] Marks attended Tottenham Grammar School before going to study medicine on a scholarship at Brasenose College, Oxford , in 1948. [ 1 ] He completed his training and qualified as a doctor at St Thomas' Hospital in London in 1954. [ 1 ]
His interest in medicine was reportedly driven in part by his mother's insistence that their childhood home be neat and tidy for the "doctor's visit", leading him and his brother to think highly of doctors and medicine as a profession. [ 2 ] During his time at Oxford, he was branded a communist after demanding that The Daily Worker , the newspaper mouthpiece of the Communist Party of Great Britain , be introduced in the university's common rooms. He later joined the party, but left it in 1956 following the suppression of the Hungarian Uprising by the Soviet Union. In the 1980s he was a member of the Social Democratic Party (SDP). [ 2 ]
Marks began his career in the late 1950s at the National Hospital for Neurology and Neurosurgery , focusing on detecting low blood sugar and researching pancreatic and glucose-management hormones. Notably, he simplified the testing for low blood glucose using glucose oxidase , a method that foreshadowed modern diabetes diagnostics including colour-changing glucose strips. [ 1 ] Collaborating with South African medical researcher Ellis Samols , Marks introduced insulin radioimmunoassay into the UK, transforming insulin level measurement. The method had earlier been developed in the United States. [ 1 ] [ 2 ]
Marks moved to Surrey in 1962, working as a consultant chemical pathologist in Epsom . He co-authored the textbook Hypoglycaemia in 1965, and later became a professor of biochemistry at the University of Surrey in 1970. Marks established a laboratory for insulin testing and founded a master's course in clinical pathology . [ 1 ] His laboratory was among the first to offer insulin assays for testing across National Health Service (NHS) hospitals in the United Kingdom. [ 3 ] His research extended to monitoring drug levels in the blood and investigating hormones like melatonin and insulin-like growth factors. [ 1 ]
Marks also studied intestinal hormones and helped designate the gastric inhibitory polypeptide (GIP) as an obesity hormone. He also coined the term "muesli belt malnutrition", referring to parents feeding their children what are considered extremely healthy foods but in the process depriving them of essential fats . [ 4 ] [ 5 ] [ 6 ] [ 7 ] He explored this topic in his book Panic Nation (2006), which he co-authored with Stanley Feldman. [ 8 ]
Marks gained prominence in the medico-legal field, providing expert opinions in notable cases. His testimony in Danish-born British socialite Claus von Bülow 's 1985 trial challenged accusations of insulin injection and led to an acquittal. The case was made into a book and later into an Oscar-winning movie, Reversal of Fortune . [ 1 ] [ 3 ] In his testimony, Marks said that the insulin-covered needle was most likely planted by someone who did not realize that insulin is cleaned off the needles once it is injected. [ 2 ] Marks also testified against Beverley Allitt in 1993, who used insulin to murder four children, and at the trial of Colin Norris in 2008. [ 2 ] [ 9 ] In 2007, he co-authored Insulin Murders , detailing his involvement in high-profile medico-legal cases and reflecting on his career. [ 1 ] The book was among the first to discuss insulin as a murder weapon and documented more than 50 years of medical cases in which insulin had been used as a weapon. [ 2 ] [ 10 ] [ 11 ]
Marks retired in 1995 but remained active as an emeritus professor , contributing to research, publishing, and medico-legal work. He served as president of the Association of Clinical Biochemists between 1989 and 1991, and as vice president of the Royal College of Pathologists . [ 1 ] In a career spanning more than 50 years, he authored over 50 papers, contributed to more than 300 research publications, and authored almost 20 textbooks. His last book, The Forensic Aspects of Hypoglycaemia , was published in 2019. [ 3 ]
In 1957, Marks married sculptor and artist Averil Sherrard and had two children. Marks was known to have been an atheist and a humanist who was opposed to religion. [ 2 ] Along with his wife, he campaigned for various causes including saving a park in Guildford , Surrey , where they lived, from developers. [ 1 ] His brother John Marks was also a doctor, and the chair of the British Medical Association . [ 1 ]
Marks died on 6 November 2023, at the age of 93. [ 1 ]
|
https://en.wikipedia.org/wiki/Vincent_Marks
|
Vincenzo Balzani (born 15 November 1936 in Forlimpopoli , Italy) is an Italian chemist , now emeritus professor at the University of Bologna .
He spent most of his professional life at the "Giacomo Ciamician" Department of Chemistry of the University of Bologna , becoming full professor in 1973. He was appointed emeritus professor on 1 November 2010.
He taught courses on General and Inorganic Chemistry, Photochemistry , Supramolecular chemistry . He was chairman of the PhD course on Chemical Sciences from 2002 to 2007 and of the "laurea specialistica" in Photochemistry and Material Chemistry from 2004 to 2007. In the Academic Year 2008–2009, he founded at the University of Bologna an interdisciplinary course on Science and Society.
He has carried out intense scientific activity in the fields of photochemistry , photophysics, electron transfer reactions, supramolecular chemistry , nanotechnology , machines and devices at the molecular level , and the photochemical conversion of solar energy . With his 650 publications cited more than 64,000 times in the scientific literature ( H index 119), [ 1 ] he is one of the best-known chemists in the world. He is author or co-author of texts for researchers in English, some translated into Chinese and Japanese, which are currently adopted in universities in many countries. A few of the most significant texts are: Photochemistry of Coordination Compounds (1970), Supramolecular Photochemistry (1991), Molecular Devices and Machines - Concepts and Perspectives for the Nanoworld (2008), Energy for a Sustainable World (2011), Photochemistry and Photophysics: Concepts, Research, Applications (2014).
For many years, alongside his scientific research, he has carried out intense public-outreach activity, including on the relationship between science and society and between science and peace, with particular reference to energy and resource issues. He is convinced that scientists have a great responsibility that derives from their knowledge, and that it is therefore their duty to actively contribute to solving the problems of humanity, particularly those connected to the current energy-climate crisis . Every year he holds dozens of seminars in primary or secondary schools and public conferences to illustrate to students and citizens the problems created by the use of fossil fuels : climate change , ecological unsustainability and the social unease deriving from growing inequalities . He believes that three transitions are necessary: from fossil fuels to renewable energies , from the linear economy to the circular economy , and from consumerism to sobriety. On these themes he has coauthored books much appreciated by students and teachers of secondary schools: Chimica (2000); Energia oggi e domani: Prospettive, sfide, speranze (2004); Energia per l'astronave Terra (2017), whose first edition (2007) won the Galileo award for scientific dissemination; Chimica! Leggere e scrivere il libro della natura (2012), English version: Chemistry! Reading and writing the book of Nature (2014); Energia, risorse, ambiente (2014); and Le macchine molecolari (2018), a finalist in the National Award for Scientific Dissemination Giancarlo Dosi.
Visiting professor: University of British Columbia, Vancouver, Canada 1972; Energy Research Center, Hebrew University of Jerusalem, Israel, 1979; University of Strasbourg, France, 1990; University of Leuven, Belgium, 1991; University of Bordeaux, France, 1994. Chairman: Gruppo Italiano di Fotochimica (1982–1986), European Photochemistry Association (1988–92); XII IUPAC Symposium on Photochemistry (1988); International Symposium on "Photochemistry and Photophysics of Coordination Compounds (since 1989, now Honorary Chairman); PhD course in Chemistry Sciences (2002–2007) e Laurea specialistica in Photochemistry and Chemistry of Materials (2004–2007), University of Bologna.
Director: Institute of Photochemistry and High Energy Radiations (FRAE), National Research Council (Italy), Bologna (1977–1988) and Center for the Photochemical Conversion of Solar Energy, University of Bologna (1981–1998). Member of the Scientific Committee of several international scientific journals. Member of the Scientific Committee of the Urban Plan for Sustainable Mobility (PUMS), of the Bologna metropolitan area (2008–).
Political activity: In 2009 he started the Science and Society interdisciplinary course at the University of Bologna with the aim of bridging the gap between University and City; he has long hoped for the strengthening of similar initiatives for the cultural growth of the Metropolitan City. In 2014 he founded the Energia per l'Italia group, [ 2 ] formed by 22 professors and researchers of the university and of the most important research centers of Bologna, with the aim of offering the Government and local politicians guidelines to tackle the energy problem according to a broad perspective that includes scientific, social, environmental and cultural aspects.
Coordinator and editor: Supramolecular Photochemistry, NATO ASI Series n. 214, Reidel, Dordrecht (1987); Supramolecular Chemistry, NATO ASI Series n. 371, Reidel, Dordrecht (1992) (with L. De Cola); Guest Editor, Supramolecular Photochemistry, New J. Chem., N.7–8, vol. 20 (1996); Editor in chief of the Handbook on Electron Transfer in Chemistry, in five volumes, Wiley-VCH, Weinheim (2001); Topics in Current Chemistry, volumes 280 and 281 on Photochemistry and Photophysics of Coordination Compounds (2007).
He is a member of: Società Chimica Italiana ; Accademia delle Scienze di Bologna; Accademia delle Scienze di Torino; Società Nazionale di Scienze, Lettere ed Arti in Napoli; Accademia Nazionale delle Scienze detta dei XL ; Accademia Nazionale dei Lincei ; European Photochemistry Association; ChemPubSoc Europe ; Academia Europaea ; European Academy of Sciences, European Academy of Sciences and Arts ; American Association for the Advancement of Science .
Pacific West Coast Inorganic Lectureship, USA and Canada, 1985; Gold Medal "S. Cannizzaro", Italian Chemical Society, 1988; Doctorate "Honoris Causa", University of Fribourg (CH), 1989; Accademia dei Lincei Award in Chemistry, Italy, 1992; Ziegler-Natta Lecturer, Gesellschaft Deutscher Chemiker, Germany, 1994; Italgas European Prize for Research and Innovation, 1994; Centenary Lecturer, The Royal Chemical Society (U.K.), 1995; Porter Medal for Photochemistry, 2000; Prix Franco-Italien de la Société Française de Chimie, 2002; Grande Ufficiale dell’Ordine al Merito della Repubblica Italiana, 2006; Quilico Gold Medal, Organic Division, Italian Chemical Society, 2008; Honor Professor, East China University of Science and Technology of Shanghai, 2009; Blaise Pascal Medal, European Academy of Sciences, 2009; Rotary Club Galileo International Prize for scientific research, 2011; Nature Award for Mentoring in Science, 2013; Archiginnasio d’oro, Città di Bologna, 2016; Grand Prix de la Maison de la Chimie (France) 2016; Leonardo da Vinci Award, European Academy of Sciences, 2017; Nicholas J. Turro Award, Inter-American Photochemical Society, 2018; Cavaliere di Gran Croce della Repubblica Italiana per meriti scientifici, 2019; Primo Levi Award, Gesellschaft Deutscher Chemiker and Società Chimica Italiana, 2019; UNESCO-Russia Mendeleev Prize, 2021.
|
https://en.wikipedia.org/wiki/Vincenzo_Balzani
|
Vincenzo Barone (b. 8 November 1952, Ancona ) is an Italian chemist, active in the field of theoretical and computational chemistry . [ 1 ]
He became full professor of physical chemistry at the University of Naples in 1994, and professor of theoretical and computational chemistry at the Scuola Normale Superiore di Pisa in 2009. [ 2 ]
He was elected director of the Scuola Normale in 2016 [ 3 ] but resigned in 2019 after a clash with the body of professors that would have resulted in a no-confidence vote. [ 4 ]
He has been chairperson of the Italian Chemical Society (SCI) from 2011 to 2013 and is also a member of the International Academy of Quantum Molecular Science (IAQMS), the European Academy of Sciences, and a fellow of the Royal Society of Chemistry (RSC).
|
https://en.wikipedia.org/wiki/Vincenzo_Barone
|
Vincenzo Mollame ( Naples , Kingdom of the Two Sicilies , 4 July 1848 – Catania , 23 June 1912) was an Italian mathematician.
Mollame was privately tutored by Achille Sanni and then studied mathematics at the University of Naples Federico II . After obtaining his degree, he became a high-school teacher, first at Benevento and then at Naples, starting in 1878. He became a professor at the University of Catania in 1880 and remained there for the rest of his career, retiring in 1911, a few months before his death. [ 1 ]
His research area was the theory of equations . In 1890 he proved that when a cubic polynomial with rational coefficients has three real roots but is irreducible in Q [ x ] (the so-called casus irreducibilis ), the roots cannot be expressed from the coefficients using real radicals alone; that is, complex non-real numbers must be involved if one expresses the roots from the coefficients using radicals. [ 2 ] He was probably unaware that Pierre Wantzel had already proved this in 1843. Mollame's research activity stopped in 1896, due to health problems.
Mollame was the author of a textbook on determinants . [ 3 ]
|
https://en.wikipedia.org/wiki/Vincenzo_Mollame
|
Vinclozolin (trade names Ronilan , Curalan , Vorlan , Touche ) is a common dicarboximide fungicide used to control diseases, such as blights, rots and molds in vineyards, and on fruits and vegetables such as raspberries, lettuce, kiwi, snap beans, and onions. It is also used on turf on golf courses. [ 1 ] Two common fungi that vinclozolin is used to protect crops against are Botrytis cinerea and Sclerotinia sclerotiorum . [ 2 ] First registered in 1981, vinclozolin is widely used but its overall application has declined. As a pesticide, vinclozolin is regulated by the United States Environmental Protection Agency (U.S. EPA). In addition to these restrictions within the United States, as of 2006 the use of this pesticide was banned in several countries, including Denmark, Finland, Norway, and Sweden. [ 3 ] It has gone through a series of tests and regulations in order to evaluate the risks and hazards to the environment and animals. Among the research, a main finding is that vinclozolin has been shown to be an endocrine disruptor with antiandrogenic effects. [ citation needed ]
Vinclozolin is manufactured by the chemical company BASF and has been registered for use in the United States since 1981. The following is a compilation of data indicating the national use of vinclozolin per crop (lbs AI/yr) in 1987: apricots, 124; cherries, 3,301; green beans, 13,437; lettuce, 24,779; nectarines, 1,449; onions, 829; peaches, 15,203; plums, 163; raspberries, 3,247; and strawberries, 41,006. [ 4 ] In 1997, two applications totaling 285 pounds each, were applied to kiwifruit in California to prevent the gray mold and soft rot caused by Botrytis cinerea. [ 5 ] In general, the United States has seen an overall decline in the national use of vinclozolin. In 1992, a total of approximately 135,000 pounds were used. However, in 1997 this number dropped to 122,000 and in 2002 it was down to 55,000 pounds. [ 6 ]
The following chemical reactions are used to make vinclozolin: [ 7 ] One method combines methyl vinyl ketone , sodium cyanide , 3,5-dichloroaniline , and phosgene . This process involves formation of the cyanohydrin , followed by hydrolysis of the nitrile. [ 4 ] Vinclozolin is also prepared by the reaction of 3,5-dichlorophenyl isocyanate with an alkyl ester of 2-hydroxy-2-methylbut-3-enoic acid. Ring closure is achieved at elevated temperature. [ 4 ]
Vinclozolin is then formulated as a dry flowable or extruded granular product. It can be applied from the air (aerial application), through irrigation systems (chemigation), or by ground equipment. Vinclozolin is also applied to some plants, such as decorative flowers, as a dip treatment in which the plant is dipped into the fungicide solution and then dried. It is also common to spray a vinclozolin solution using thermal foggers in greenhouses. [ 1 ]
All pesticides sold or distributed in the United States must be registered by the U.S. EPA. Pesticides first registered before November 1, 1984, were required to be reregistered so that they could be re-evaluated using more advanced methods. Because vinclozolin was released in 1981, it has gone through both a preliminary and a subsequent reregistration. [ 1 ] Below is a list of the history of regulations for vinclozolin:
The U.S. EPA has examined dietary (food and water), non-dietary, and occupational exposure to vinclozolin or its metabolites. In general, fungicides have been shown to circulate through water and air, and it is possible for them to end up on untreated foods after application. Consumers alone cannot easily reduce their exposure because fungicides are not removed from produce washed with tap water. [ 9 ] A key example of exposure to vinclozolin is through wine grapes, which are considered to account for about 2% of total vinclozolin exposure. [ 10 ] It has been determined that people may be exposed to residues of vinclozolin and its metabolites containing the 3,5-dichloroaniline moiety (3,5-DCA) through diet, and thus tolerance limits have been established for each crop. [ 1 ] Although vinclozolin is not registered for use by homeowners, it is still possible for people to come into contact with the fungicide and its residues. For example, golfers playing on treated golf courses, and families playing on sod which has previously been treated, may be at risk for exposure. [ 1 ] Occupationally, workers can be exposed to vinclozolin while doing activities such as loading and mixing. [ 1 ]
As part of the reregistration process, the U.S. EPA reviewed all toxicity studies on vinclozolin. The main effect induced by vinclozolin is related to its antiandrogenic activity and its ability to act as a competitive antagonist of the androgen receptor . [ citation needed ] Vinclozolin and its metabolites can bind to androgen receptors in place of male hormones such as testosterone, without necessarily activating those receptors properly. There is evidence that vinclozolin itself binds weakly to the androgen receptor but that at least two of its metabolites are responsible for much of the antiandrogenic activity. [ 8 ] When male rats were given low dose levels (>3 mg/kg/day) of vinclozolin, effects such as decreased prostate weight, weight reduction in sex organs, nipple development, and decreased ano-genital distance were noted. At higher dose levels, male sex organ weight decreased further, and sex organ malformations were seen, such as reduced penis size, the appearance of vaginal pouches and hypospadias . [ 8 ] In the rat model, it has been shown that the antiandrogenic effects of vinclozolin are most prominent during the developmental stages. [ 8 ] In utero, this sensitive period of fetal development occurs between gestation days 16-17. [ 11 ] Embryonic exposure to vinclozolin can influence sexual differentiation, gonadal formation, and reproductive functions. [ 12 ]
In bird models, vinclozolin and its metabolites were shown in vitro and in vivo to inhibit androgen receptor binding and gene expression. Vinclozolin caused reduced egg laying, reduced fertility rate, and a reduction in successful hatches. [ 1 ] Androgens also play a role in puberty, and it has been shown that an antiandrogen like vinclozolin can delay pubertal maturation. [ 11 ] Antiandrogenic toxins are also known to alter sexual differentiation and reproduction in the rabbit model. Male rabbits exposed to vinclozolin in utero or during infancy showed no sexual interest in females or did not ejaculate. [ 11 ] Since the androgen receptor is widely conserved across species lines, antiandrogenic effects would be expected in humans. [ 8 ] In vertebrates, vinclozolin also acts as a neuroendocrine disruptor, affecting behaviors tied to locomotion, cognition, and anxiety. [ 13 ]
In rats, vinclozolin has been shown to affect other steroid hormone receptors , such as those of progesterone and estrogen . Just as with androgens, the timing of the exposure to vinclozolin determines the magnitude of the effects related to these hormones. In one study in rats, in vitro experiments showed that two vinclozolin metabolites can bind to the progesterone receptor; however, the same study found no effects in vivo in adult male rats. [ 14 ] When mice were exposed to vinclozolin in utero, male offspring exhibited up-regulated estrogen receptor and progesterone receptor expression, while female offspring showed down-regulated estrogen receptor expression and up-regulated progesterone receptor expression. These changes contribute to the feminization of males and the masculinization (virilization) of females. [ 14 ]
In rats, vinclozolin has been demonstrated to have transgenerational effects, meaning that not only is the initially exposed animal affected, but effects are also seen in subsequent generations. One study demonstrated that vinclozolin impaired male fertility not only in the first generation exposed in utero, but also in males of the following three generations and beyond. [ 15 ] Furthermore, when affected males were mated with normal females, some of the offspring were sterile and some had reduced fertility. After three generations, male offspring continued to show low sperm count, prostate disease and high rates of testicular cell apoptosis . [ 15 ] [ 16 ] Other studies exposed rat embryos to vinclozolin during sex determination. F1 (first generation) vinclozolin-treated males were bred with F1 vinclozolin-treated females, and this pattern continued for three generations; the initial F0 mother was the only subject directly exposed to vinclozolin. Males of the F1-F4 generations all showed an increased prevalence of tumors, prostate disease, kidney disease, testis abnormalities and immune failures when compared with the control group, and F1-F4 females also showed an increased incidence of tumors and kidney disease. [ 12 ] Furthermore, transgenerationally transmitted changes in mate preference and anxiety behavior have also been observed in rats following exposure to vinclozolin. [ 17 ] It has been reported that these transgenerational effects correlate with epigenetic changes, specifically an alteration in DNA methylation in the male germ line. [ 17 ] However, these transgenerational changes have not been successfully reproduced by scientists at BASF, the manufacturer of vinclozolin. [ 18 ]
The U.S. EPA has classified vinclozolin as a possible human carcinogen. Vinclozolin induces an increase in Leydig cell tumors in rats. The 3,5-DCA metabolite is thought to induce tumors through a mode of action similar to that of p-chloroaniline. [ 8 ]
Laboratory tests indicate that vinclozolin readily breaks down and dissipates in the environment with the help of microbes. Of its several metabolites, 3,5-dichloroaniline resists further degradation. [ 8 ] In terrestrial field dissipation studies conducted in various states, vinclozolin dissipated with a half-life between 34 and 94 days. When residues are included, half-lives can reach up to 1,000 days. Residues may accumulate and be available for future crop uptake. [ 8 ]
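As a rough illustration of what such half-lives imply (an assumption-based calculation, not a figure from the cited studies): with a 94-day half-life, the fraction of applied vinclozolin remaining after one year is $(1/2)^{365/94} \approx 0.07$, i.e. roughly 7%, whereas with a 1,000-day half-life for residues, about $(1/2)^{365/1000} \approx 0.78$, or 78%, would still remain after a year.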
Since the phase-out of vinclozolin, farmers are faced with fewer options to control gray and white mold. The New York State Agricultural Experiment Station has carried out efficacy trials for gray and white mold. Research has shown potential alternatives to vinclozolin. Trifloxystrobin (Flint), iprodione (Rovral), and cyprodinil plus fludioxonil (Switch) control gray mold. Thiophanate-methyl (Topsin M) was as effective as vinclozolin in controlling white molds. Switch was the most promising alternative to vinclozolin for controlling both gray and white mold on pods and for increasing marketable yield. [ 19 ]
|
https://en.wikipedia.org/wiki/Vinclozolin
|
[Vinculum usage (examples): line segment from A to B; repeating decimal 0.142857142857…; complex conjugate; Boolean NOT (A AND B); the radical of ab + 2; bracketing function]
A vinculum (from Latin vinculum ' fetter, chain, tie ' ) is a horizontal line used in mathematical notation for various purposes. It may be placed as an overline above, or an underline below, a mathematical expression to group the expression's elements. Historically, vincula were extensively used to group items together, especially in written mathematics, but in modern mathematics this use has almost entirely been replaced by parentheses . [ 1 ] The vinculum was also used to mark Roman numerals whose values are multiplied by 1,000. [ 2 ] Today, however, the common usage of a vinculum to indicate the repetend of a repeating decimal [ 3 ] [ 4 ] is a significant exception and reflects the original usage.
The vinculum, in its general use, was introduced by Frans van Schooten in 1646 as he edited the works of François Viète (who had himself not used this notation). Earlier versions were nevertheless common: Chuquet used an underline in 1484, and Descartes used the overline in a limited form in 1637, only in relation to the radical sign. [ 5 ]
A vinculum can indicate a line segment, where A and B are the endpoints: $\overline{AB}$
A vinculum can indicate the repetend of a repeating decimal value: $\tfrac{1}{7} = 0.\overline{142857} = 0.142857142857\ldots$
A vinculum can indicate the complex conjugate of a complex number : $\overline{x + iy} = x - iy$
The logarithm of a number less than 1 can conveniently be represented using a vinculum over the negative characteristic: $\log 0.2 = \bar{1}.30103$, meaning $-1 + 0.30103$.
In Boolean algebra , a vinculum may be used to represent the operation of inversion (also known as the NOT function), for example $Y = \overline{A \cdot B}$,
meaning that Y is false only when both A and B are true; equivalently, Y is true when either A or B is false.
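The corresponding truth table makes this behaviour explicit for $Y = \overline{A \cdot B}$:
A = 0, B = 0 → Y = 1
A = 0, B = 1 → Y = 1
A = 1, B = 0 → Y = 1
A = 1, B = 1 → Y = 0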
Similarly, it is used to show the repeating terms in a periodic continued fraction ; quadratic irrational numbers are the only numbers whose continued fraction expansions are periodic. For example, $\sqrt{2} = [1; \overline{2}]$.
Formerly its main use was as a notation to indicate a group (a bracketing device serving the same function as parentheses), as in $a - \overline{b + c}$,
meaning to add b and c first and then subtract the result from a , which would be written more commonly today as a − ( b + c ) . Parentheses, used for grouping, are only rarely found in the mathematical literature before the eighteenth century. The vinculum was used extensively, usually as an overline, but Chuquet in 1484 used the underline version. [ 6 ]
In India, the use of this notation is still tested in primary school. [ 7 ]
The vinculum is used as part of the notation of a radical to indicate the radicand whose root is being indicated. In the following, the quantity $ab + 2$ is the whole radicand, and thus has a vinculum over it: $\sqrt{ab + 2}$
In 1637 Descartes was the first to unite the German radical sign √ with the vinculum to create the radical symbol in common use today. [ 8 ]
The symbol used to indicate a vinculum need not be a line segment (overline or underline); sometimes braces can be used (pointing either up or down). [ 9 ]
In LaTeX , text can be overlined with $\overline{\mbox{<text>}}$ . The inner \mbox{} is necessary because \overline{} must be used in math mode (here invoked by the dollar signs); the \mbox{} keeps the overlined content set as ordinary upright text rather than as a mathematical expression.
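A minimal compilable sketch of this usage (the file contents below are illustrative, not taken from the article):
\documentclass{article}
\begin{document}
% \overline requires math mode; \mbox keeps the overlined label as upright text
The segment $\overline{\mbox{AB}}$ joins A and B, and
$1/7 = 0.\overline{142857}$ shows the repetend notation.
\end{document}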
|
https://en.wikipedia.org/wiki/Vinculum_(symbol)
|
The Vindhyan Ecology and Natural History Foundation (VENHF) is a registered non-profit organisation (2012) headquartered in Mirzapur , Uttar Pradesh, India, working for the protection and conservation of nature, natural resources and the rights of nature-dependent communities in the ecologically fragile landscape of the Vindhya Range in India. Vindhya Bachao Abhiyan is the flagship campaign of the organization.
Vindhya Bachao Abhiyan ( Hindi pronunciation: [viŋd̪ʱyaː batʃaːoː] English meaning: Save Vindhya Campaign ) is the flagship program of VENHF, which works towards environmental equity and ecological justice through research-based environmental litigation, strengthening grass-roots environmental movements, supporting institutions of local governance, and protecting the rights of nature-dependent indigenous communities. [ 1 ]
In 2017, the Vindhyan Ecology and Natural History Foundation, in association with WWF-India, published a sign-based study on sloth bears in Mirzapur, which identified five forest ranges as critical wildlife habitats. The study estimated an area of 430 km 2 (170 sq mi) of core sloth bear habitat and a total of 1,110 km 2 (430 sq mi) of Reserve Forest area which may be protected as wildlife habitat. [ 2 ] [ 3 ] This study was followed by a camera trap survey in three forest ranges of the Mirzapur forest division – Marihan, Sukrit and Chunar – between May 2018 and July 2018. A total of 15 camera traps were deployed at 50 different locations, selected randomly to cover different habitat types and places likely to be used by animals. The study was conducted in collaboration with the Mirzapur Forest Department and was supported by the David Shepherd Wildlife Foundation and Wildlife Trust of India . It recorded 24 wildlife species, several of which were recorded for the first time in the district, and documented the Asiatic wildcat for the first time in Uttar Pradesh. A proposal for a Sloth Bear Conservation Reserve was made based on this study. [ 4 ] [ 5 ] [ 6 ]
VENHF, under the banner of Vindhya Bachao, opposed the 1320 MW coal-based thermal power station in Mirzapur proposed by M/s Welspun Energy U.P. Private Limited from the year 2013. In a site visit report published by Vindhya Bachao in September 2013, it was claimed that the project proponent had concealed information on the presence of forests and of several Schedule I species under the Wildlife Protection Act, 1972 in the EIA report submitted to the Ministry of Environment and Forests (India) . In 2013 it was reported by Down to Earth that the plan was "mired in controversy following allegations that the company concealed information about the presence of forestland and endangered wildlife at the project site. The farmers in the region have also been protesting against the project, alleging the company bought land for the project by cheating them." [ 8 ] Debadityo Sinha, founder of VENHF, claimed in his articles that the project would be a threat to the river Ganga and to the upper Khajuri Reservoir used for drinking water and irrigation. It was apprehended that the project, if it came into existence, would also threaten a historic waterfall of Mirzapur known as Wyndham Fall and would jeopardise the drinking water supply of the newly established Rajiv Gandhi South campus of Banaras Hindu University . He also alleged that the public hearing process for the project was greatly compromised and that local people were prohibited from entering the public hearing premises. [ 9 ] [ 10 ] A research paper published by VENHF in the October 2015 edition of the open-access journal Present Environment and Sustainable Development (Walter de Gruyter) contradicted the land use and land cover map of the project site submitted by the company. [ 11 ]
The National Green Tribunal , New Delhi quashed the Environmental Clearance granted to the project in its judgment dated 21 December 2016 in a matter filed by Vindhya Bachao members Debadityo Sinha, Shiva Kumar Upadhyaya and Mukesh Kumar. [ 12 ]
The Vindhya Bachao website has a separate portal, the Mirzapur Thermal Power Plant Resource Page, with extensive information resources on the project, including site visit reports, minutes of MoEF meetings discussing the project, accounts of protests, and documents submitted by Welspun Energy. [ 13 ]
Vindhya Bachao Abhiyan exposed illegalities in the environment clearance and forest clearance surrounding the controversial Kanhar Dam Project in Sonbhadra district , Uttar Pradesh, on the Kanhar River . The information collected by Vindhya Bachao using the Right to Information Act, 2005 was the basis for challenging the construction of the dam. [ 14 ] [ 15 ] Members of Vindhya Bachao and the People's Union for Civil Liberties challenged the project in the National Green Tribunal , New Delhi. [ 16 ] [ 17 ] The construction of the dam was thereafter stayed by the National Green Tribunal in December 2014. [ 18 ] [ 19 ] [ 20 ]
The Chief Secretary of the Chhattisgarh government in April 2015 took note of the irregularities highlighted by Vindhya Bachao Abhiyan and asked the Uttar Pradesh government to stop the construction until the survey of, and compensation for, the affected villages were completed. [ 21 ]
The National Green Tribunal passed its final judgement on 7 May 2015, staying any new construction but allowing the construction already underway to continue. The court also formed a high-level committee under the chairmanship of the Principal Chief Conservator of Forests , Uttar Pradesh, to report on the directions issued in the judgment. [ 22 ] [ 23 ] [ 24 ] The members filed a review petition against the judgment, following which the court directed on 7 July 2015 that "This Application is disposed of with an observation that upon the filing of the report by High Power Committee; constituted under the Judgment of the Tribunal, the Tribunal will pass further directions after hearing the parties regarding all matters as mentioned in the Judgment including Environmental Clearance and Forest Clearance." [ 25 ] The Tribunal, through its order dated 21 September 2015, issued a show-cause notice to the Principal Chief Conservator of Forests, Uttar Pradesh for not submitting the report within the deadline. [ 26 ] An article published by Vindhya Bachao Abhiyan on its portal in December 2015 states that the petitioners were unsatisfied with the report submitted by the committee and alleged that the State government was violating the judgment passed by the tribunal on 7 May 2015. [ 27 ]
Vindhya Bachao Website has a separate portal Kanhar Dam Resource Page for sharing the latest updates on the Kanhar Dam cases. [ 28 ]
Vindhyan Ecology and Natural History Foundation challenged the declaration of a 1 km Eco-Sensitive Zone around Kaimoor Wildlife Sanctuary in the districts of Mirzapur and Sonbhadra in Uttar Pradesh at the National Green Tribunal , New Delhi. The petition said the Ministry of Environment, Forest and Climate Change should have based the delineation on ecologically sensitive areas, water bodies, forests, wildlife habitats and other eco-sensitive features of the site rather than applying a uniform distance. The tribunal dismissed the plea, and the dismissal was upheld by the Supreme Court of India. [ 29 ] [ 30 ]
Members of Vindhya Bachao, along with Bharat Jhunjhunwala and other environmentalists, wrote to the Ministry of Water Resources (India) , the World Bank and other Indian states on the ecological and cultural impacts of reviving the National Waterway 1 on the river Ganges . [ 31 ] [ 32 ]
VENHF sent a representation to the Ministry of Environment, Forests and Climate Change , Government of India, on the proposed draft notification declaring a 1 km eco-sensitive zone around the Kaimoor Wildlife Sanctuary . The representation was endorsed by renowned wildlife experts Mike Pandey and Asad Rahmani. [ 33 ]
VENHF hosts an information portal called Saving the Habitat which shares information on the wildlife of Mirzapur. In December 2014 the organization sent a representation to the Government of India demanding that some areas of the Mirzapur Forest Division be declared Protected areas of India . [ 34 ]
In June 2015 VENHF reviewed the Draft Notification on Emission Standards for Thermal Power Plants in India and sent a representation to the Government of India. [ 35 ]
In October 2015 VENHF sent a representation on the Draft Environment Laws (Amendment) Bill, 2015 to the Government of India, in which it claimed that the bill would dilute the Environment Protection Act, 1986 . [ 36 ]
Debadityo Sinha, the founder of Vindhyan Ecology and Natural History Foundation was awarded the Sanctuary Wildlife Service Award on 20 December 2019 by Sanctuary Asia , DSP Mutual Fund and IndusInd Bank . [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ excessive citations ]
Mr Firoz Ahmad, a forestry and remote sensing expert associated with VENHF, received the National E-Governance Award 2019-20 from the Department of Administrative Reforms and Public Grievances, Ministry of Personnel, Public Grievances and Pensions , Government of India, during the 23rd National Conference on E-Governance held in Mumbai on 7–8 February 2020. [ 43 ] [ 44 ] [ 45 ]
VENHF is a partner of EKOenergy [ 46 ] [ 47 ] and the Global Call for Climate Action. [ 48 ] The organization has published studies in association with WWF-India , Wildlife Trust of India , Earth Matters Foundation, David Shepherd Wildlife Foundation, the Government of Uttar Pradesh and the Government of Arunachal Pradesh . [ 49 ] [ 50 ] [ 51 ] [ 52 ]
|
https://en.wikipedia.org/wiki/Vindhyan_Ecology_and_Natural_History_Foundation
|
Vineet Bafna is an Indian bioinformatician, professor of computer science, and director of the bioinformatics program at the University of California, San Diego . [ 4 ] [ 5 ] He was elected a Fellow of the International Society for Computational Biology (ISCB) in 2019 for outstanding contributions to the fields of computational biology and bioinformatics . [ 1 ] He has also been a member of the Research in Computational Molecular Biology (RECOMB) conference steering committee. [ 6 ]
Bafna received his Ph.D. in computer science from Pennsylvania State University in 1994 under the supervision of Pavel Pevzner , and was a post-doctoral researcher at the Center for Discrete Mathematics and Theoretical Computer Science . [ 3 ] [ 7 ] From 1999 to 2002, he worked at Celera Genomics , ultimately as director of informatics research, where he was part of the team (along with J. Craig Venter and Gene Myers ) that assembled and annotated the human genome in 2001. [ 8 ] [ 9 ] He was also a member of the team that published the first diploid (six-billion-letter) genome of an individual human in 2007. [ 10 ]
He joined the faculty of the University of California, San Diego in the Department of Computer Science and Engineering in 2003, where he now serves as professor and director of the bioinformatics program. [ 4 ] [ 5 ]
|
https://en.wikipedia.org/wiki/Vineet_Bafna
|
Vinegar syndrome , also known as acetic acid syndrome , [ 1 ] is a condition created by the deacetylation of cellulose acetates (usually cellulose diacetate ) and cellulose triacetate . [ 2 ] This deacetylation produces acetic acid , giving off a vinegar odor that gives the condition its name; as well, objects undergoing vinegar syndrome often shrink, become brittle , and form crystals on their surface due to the migration of plasticizers . [ 3 ] Vinegar syndrome widely affects cellulose acetate film as used in photography. [ 4 ] It has also been observed to affect older magnetic tape , where cellulose acetate is used as a base, as well as polarizers used in liquid-crystal display units and everyday plastics such as containers and tableware . [ 5 ] [ 6 ] [ 7 ] [ 8 ] High temperatures and fluctuations in relative humidity have been observed to accelerate the process. [ 3 ] The process is autocatalytic , and the damage done by vinegar syndrome is irreversible. [ 3 ] [ 4 ]
The first instance of cellulose triacetate degradation was reported to the Eastman Kodak Company within a decade of its introduction in 1948. The first report came from the Government of India, whose film materials were stored in hot, humid conditions. It was followed by further reports of degradation from collections stored in similar conditions. These observations resulted in continuing studies in the Kodak laboratories during the 1960s. Film degradation can only be delayed by storage in dry and cold conditions. It was initially thought that storage under recommended conditions might delay decay by 450 years, but some films are developing vinegar syndrome after just 70 years of cold dry storage. [ 4 ]
The film preservationist Harold Brown is credited with coining the phrase "vinegar syndrome". [ 9 ]
In acetate film, acetyl (CH 3 CO) groups are attached to long molecular chains of cellulose . With exposure to moisture, heat, or acids, these acetyl groups break from their molecular bonds and acetic acid is released. [ 10 ] While the acid is initially released inside the plastic, it gradually diffuses to the surface, causing a characteristic vinegary smell.
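A simplified scheme of this deacetylation (a generic ester hydrolysis, offered here as an illustration rather than taken from the cited sources) is:
cellulose–O–CO–CH 3 + H 2 O → cellulose–OH + CH 3 COOH (acetic acid)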
The decay process follows this pattern:
A-D, or "acid-detection", indicator strips, a testing product developed by the Image Permanence Institute , change color from blue through shades of green to yellow with increasing exposure to acid. According to the test's User's Guide, they were "created to aid in the preservation of collections of photographic film, including sheet and roll films , cinema film, and microfilm. They provide a nondestructive method of determining the extent of vinegar syndrome in film collections." [ 13 ] [ 8 ] These tools can be used to determine the extent of damage to a film collection and which steps should be taken to prolong its usability. [ 8 ]
|
https://en.wikipedia.org/wiki/Vinegar_syndrome
|
The Vintage Computer Festival ( VCF ) is an international event celebrating the history of computing . It is held annually in various locations around the United States and in several other countries. It was founded by Sellam Ismail [ 1 ] in 1997.
The Vintage Computer Festival promotes the preservation of "obsolete" computers by offering the public a chance to experience the technologies, people and stories that embody the remarkable tale of the computer revolution. VCF events include hands-on exhibit halls, VIP keynote speeches, consignment sales, technical classes, and other attractions depending on venue. It is consequently one of the premier physical markets for antique computer hardware.
The Vintage Computer Federation runs VCF East [ 2 ] ( Wall Township, New Jersey ), VCF Pacific Northwest [ 3 ] ( Seattle , Washington ), and VCF West [ 4 ] ( Mountain View, California ). Independent editions include VCF SoCal (Southern California / Orange, California), VCF Midwest [ 5 ] (metro Chicago , Illinois ), VCF Southwest ( Dallas-Fort Worth Metroplex ), VCF Europa ( Munich and Berlin , Germany ; Vintage Computer Festival Zürich, Switzerland ), [ 6 ] Vintage Computer Festival GB, [ 7 ] and VCF Southeast [ 8 ] ( Atlanta , Georgia ).
|
https://en.wikipedia.org/wiki/Vintage_Computer_Festival
|
A vintage computer is an older computer system that is largely regarded as obsolete.
The personal computer has been around since about 1971, [ 1 ] and in that time technological advancement has meant that existing models are replaced every few years. Nevertheless, these otherwise useless computers have spawned a sub-culture of vintage computer collectors, who often spend large sums for the rarest examples, not only to display them but to restore them to working order. [ 2 ] [ 3 ] This involves active software development and adaptation to modern uses . It often includes homebrew developers and hackers who add on, update and create hybrid composites from new and old computers for uses they were never intended for. [ 4 ] [ 5 ]
Ethernet interfaces have been designed for many vintage 8-bit machines to allow limited connectivity to the Internet , where users can access discussion groups, bulletin boards , and software databases. [ 6 ] Most of this hobby centers on computers made after 1960 , though some collectors also specialize in older computers . [ 7 ]
The Vintage Computer Festival , an event held by the Vintage Computer Federation for the exhibition and celebration of vintage computers, has been held annually since 1997 and has expanded internationally. [ 8 ]
Micro Instrumentation and Telemetry Systems (MITS) produced the Altair 8800 in 1975. According to Harry Garland, the Altair 8800 was the product that catalyzed the microcomputer revolution of the 1970s . [ 9 ]
IMSAI produced a machine similar to the Altair 8800. It was introduced in 1975, first as a kit, and later as an assembled system. [ 10 ] The list price was $591 (equivalent to $3,500 in 2024) for a kit, and $931 (equivalent to $5,440 in 2024) assembled. [ 11 ]
Processor Technology produced the Sol-20 . This was one of the first machines to have a case that included a keyboard , a design feature copied by many later "home computers".
Southwest Technical Products Corporation ( SWTPC ) produced the 8-bit SWTPC 6800 and later the 16-bit SWTPC 6809 kits that employed the Motorola 68xx series microprocessors.
The earliest Apple Inc. personal computers, using MOS Technology 6502 processors, are among the most collectible. They are relatively easy to maintain in an operational state thanks to Apple's use of readily available off-the-shelf parts.
Perhaps because of its friendly design, its pioneering of the first commercially successful graphical user interface , and its enduring Finder application, which persists on the most current Macs, the Macintosh is one of the most collected and used vintage computers. Supported by dozens of websites around the world, old Macintosh hardware and software are still put to daily use. The Macintosh had a strong presence in many early computer labs, creating a nostalgia factor for former students who recall their first computing experiences.
|
https://en.wikipedia.org/wiki/Vintage_computer
|