A vintage design is a design of another era that holds important and recognizable value. [ 1 ] Vintage styles can be applied to interior design , decor, clothing and other areas. Vintage design is popular [ 2 ] and vintage items have risen in price. Outlets for vintage design have shifted from thrift stores to shabby chic stores. [ 3 ]
There is debate over what determines if an item is vintage . Some rely on the definition of anything old and of value. The most widely accepted definition used by antique and vintage professionals is anything older than 40 (and less than 100) years old. [ 4 ]
The terms vintage, retro , and antique are often used interchangeably and have some overlap, but each has a distinct meaning. Retro refers to a style that is iconic of a previous era. Vintage typically describes an item made from high-quality materials or craftsmanship, representative of a specific time period or artist, and generally between 40 and 100 years old.
Antique, on the other hand, refers to items from earlier periods, specifically those that are at least 100 years old. [ 5 ] [ 1 ] [ 6 ] A related term, antiquity , refers to objects from ancient times or past eras. [ 7 ]
The word "vintage" originates from Late Middle English, derived from Old French and Latin.
Vintage items spark interest in many. The United States Department of Labor states that "Design and fashion trends play an important part in the production of furniture. The integrated design of the article for both esthetic and functional qualities is also a major part of the process of manufacturing furniture." [ 8 ]
The popularity of vintage design and vintage-inspired items can be seen through media. In 2004 designer Nicolas Ghesquière created a line for Balenciaga which called back to older collections. [ 9 ] Tom Ford 's collections also use references to the past. Vintage design can also be seen in ads which promote vintage-inspired clothing. [ 9 ]
There are several reasons for vintage design's popularity. Some claim the phenomenon is due to the rarity and classic value of the items. [ 9 ] Others attribute it to a mixture of people's nostalgia, which creates a positive emotional appeal toward a past era or their childhood, consumers' environmental concerns, an appreciation of past styles and craftsmanship, and other experiences. [ 10 ] [ 11 ]
Vintage design contains various subcategories reflecting the vast diversity of aesthetics that make up traditional and 20th century design styles. [ 2 ] [ 12 ]
Art Nouveau is a style containing curved lines, flowers and other plants, contrasting colors, ornate colors, young women, and intricate details. It was created at the end of the 1800s and gained popularity at the start of the 1910s. [ 2 ]
Art Deco was created to intentionally embrace a clean, modern, and man-made look, developed and popular from the 1920s and reaching its peak in the 1930s. This style features mostly geometric shapes, symmetrical patterns, and idealized human figures. [ 2 ]
Mid-century modern style makes use of straight, clear lines, curved objects, wood tones, thin supports, and oversized objects. It is meant to call back to the mid-20th century. [ 2 ]
Referring to the period roughly corresponding to 1940–1963, the Atomic Age includes elements of space exploration, scientific discovery, and futurism , creating an idea of an "optimistic, modern world". [ 2 ] Atomic Age design became popular and instantly recognizable, with a use of atomic motifs and space age symbols.
International Style design contains broad block letters in fonts such as Helvetica (see Swiss Style for further information on the typographic style) and sleek, modern lines invoking Mies-ian simplicity and a cosmopolitan air. [ 2 ]
The styles of the 1970s are incredibly popular in vintage design, recalling the aesthetics of hippies and other counterculture groups of the era. Use of natural color combinations such as the well-known ' harvest gold , avocado green , and burnt orange ' was widespread, as were psychedelic colors and designs such as paisley . [ 2 ]
The punk counterculture style of the late 1970s and 1980s is reused today. It contains harsh lines, clashing colors, juxtaposition, and 'edgy' imagery to create an anti-authoritarian aesthetic. [ 2 ]
Postmodernism as a style incorporates bold colors and abstract geometric motifs with intentionally humorous references to past architectural and design traditions, popular in the 1980s and 1990s. Whereas 'less is more' was a tenet of modernism, postmodern architect Robert Venturi quipped 'less is a bore'. [ 13 ] Postmodernism has heavily influenced the vaporwave aesthetic. [ 2 ]
|
https://en.wikipedia.org/wiki/Vintage_design
|
VinyLoop is a proprietary physical plastic recycling process for polyvinyl chloride (PVC). It is based on dissolution in order to separate PVC from other materials or impurities. [ 1 ]
A major factor in the recycling of polyvinyl chloride waste is the purity of the recycled material. In most composite materials, PVC is found among several other materials, such as wood, metal, or textile. To make new products from the recycled PVC, it is necessary to separate it from these other materials. Traditional recycling methods are insufficient and expensive because this separation has to be done manually, product by product. [ 2 ]
VinyLoop is a recycling process which separates PVC from other materials through a process of dissolution, filtration and separation of contamination. A solvent is used in a closed loop to elute PVC from the waste. This makes it possible to recycle composite structure PVC waste, which would normally be incinerated or put in a landfill site. [ 3 ]
The process consists of the following steps:
Possible products made from recycled PVC are coatings for waterproofing membranes, pond foils, shoe soles, hoses, tunnel diaphragms, coated fabrics, and PVC sheets. It is an attempt to solve the recycling waste problem of PVC products. [ 6 ] [ 7 ] [ 8 ]
VinyLoop-based recycled PVC's primary energy demand is 46 percent lower than that of conventionally produced PVC. The global warming potential is 39 percent lower. [ 9 ]
The VinyLoop process has been selected to recycle membranes of different temporary venues of the London Olympics 2012 . Roofing covers of the Olympic Stadium , the Water Polo Arena , the London Aquatics Centre and the Royal Artillery Barracks will be deconstructed and a part will be recycled in the VinyLoop process. [ 10 ]
Since the process could not remove low molecular weight phthalate plasticizers during recycling, tightening EU regulations meant the recycling plant based in Ferrara, Italy has closed as of 28 June 2018. [ 11 ]
|
https://en.wikipedia.org/wiki/VinyLoop
|
This page provides supplementary chemical data on vinyl bromide .
Handling this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI , and follow its directions.
|
https://en.wikipedia.org/wiki/Vinyl_bromide_(data_page)
|
In organic chemistry , a vinyl group (abbr. Vi ; [ 1 ] IUPAC name : ethenyl group [ 2 ] ) is a functional group with the formula −CH=CH 2 . It is the ethylene (IUPAC name: ethene) molecule ( H 2 C=CH 2 ) with one fewer hydrogen atom. The name is also used for any compound containing that group, namely R−CH=CH 2 where R is any other group of atoms.
An industrially important example is vinyl chloride , precursor to PVC , [ 3 ] a plastic commonly known as vinyl .
Vinyl is one of the alkenyl functional groups. On a carbon skeleton, sp 2 -hybridized carbons or positions are often called vinylic . Allyls , acrylates and styrenics contain vinyl groups. (A styrenic crosslinker with two vinyl groups is called divinyl benzene .)
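As a brief, hedged illustration (not part of the original article), the presence of a vinyl group in a molecule can be checked programmatically by substructure matching; the sketch below assumes the open-source RDKit toolkit, and the example molecules are chosen arbitrarily.

```python
# Minimal sketch: flag compounds containing a vinyl (ethenyl) group, R-CH=CH2,
# using RDKit substructure matching. Example molecules are illustrative only.
from rdkit import Chem

# SMARTS for a terminal vinyl group: an sp2 CH bonded to an sp2 CH2
vinyl = Chem.MolFromSmarts("[CX3H1]=[CX3H2]")

examples = {
    "vinyl chloride": "C=CCl",
    "styrene": "C=Cc1ccccc1",
    "ethane": "CC",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name}: contains vinyl group = {mol.HasSubstructMatch(vinyl)}")
```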
Vinyl groups can polymerize with the aid of a radical initiator or a catalyst, forming vinyl polymers . Vinyl polymers contain no vinyl groups. Instead they are saturated. The following table gives some examples of vinyl polymers.
Vinyl derivatives are alkenes . If activated by an adjacent group, the increased polarization of the bond gives rise to characteristic reactivity, which is termed vinylogous :
Vinyl organometallics, e.g. vinyllithium and vinyl tributyltin , participate in vinylations including coupling reactions such as in Negishi coupling .
The radical was first reported by Henri Victor Regnault in 1835 and initially named aldehydène . Due to the incorrect measurement of the atomic mass of carbon it was believed to be C 4 H 6 at the time. Then in 1839 it was renamed by Justus von Liebig to " acetyl ", because he believed it to be the radical of the acetic acid . [ 4 ]
The modern term was coined by German chemist Hermann Kolbe in 1851, who rebutted Liebig's hypothesis. [ 5 ] However even in 1860 Marcellin Berthelot still based the name he coined for acetylene on Liebig's nomenclature and not on Kolbe's.
The etymology of "vinyl" is the Latin vinum , " wine ", and the Greek hyle (ὕλη, "matter or material"), because of its relationship with ethyl alcohol .
|
https://en.wikipedia.org/wiki/Vinyl_group
|
In organic chemistry , a vinyl iodide (also known as an iodoalkene ) functional group is an alkene with one or more iodide substituents. Vinyl iodides are versatile molecules that serve as important building blocks and precursors in organic synthesis. They are commonly used to form carbon–carbon bonds in transition-metal-catalyzed cross- coupling reactions , such as the Stille reaction , Heck reaction , Sonogashira coupling , and Suzuki coupling . [ 1 ] The synthesis of vinyl iodides with well-defined geometry is important in the stereoselective synthesis of natural products and drugs .
Vinyl iodides are generally stable under nucleophilic conditions. In S N 2 reactions, backside attack is difficult because of steric clash with the R groups on the carbon adjacent to the electrophilic center (see figure 1a). [ 2 ] In addition, the lone pair on iodide donates into the π* orbital of the alkene, which reduces the electrophilic character of the carbon as a result of decreased positive charge. This stereoelectronic effect also strengthens the C–I bond, making removal of the iodide difficult (see figure 1b). [ 3 ] In the S N 1 case, dissociation is difficult because of the strengthened C–I bond, and loss of the iodide would generate an unstable carbocation (see figure 1c). [ 2 ]
In cross- coupling reactions , vinyl iodides typically react faster and under milder conditions than vinyl chlorides and vinyl bromides. The order of reactivity follows the strength of the carbon–halogen bond. The C–I bond is the weakest of the carbon–halogen bonds: its bond dissociation energy is 57.6 kcal/mol, while those of C–F, C–Cl, and C–Br are 115, 83.7, and 72.1 kcal/mol respectively. [ 4 ] As a result of this weaker bond, vinyl iodide does not polymerize as easily as its vinyl halide counterparts but rather decomposes and releases iodide . [ 5 ] It is generally believed that vinyl iodides cannot survive common reduction conditions, which reduce the vinyl iodide to an olefin or alkane . [ 6 ] However, there is evidence in the literature of a propargyl alcohol 's alkyne being reduced in the presence of a vinyl iodide using hydrogen over Pd/CaCO 3 or Crabtree's catalyst . [ 7 ]
Besides serving as useful substrates in transition-metal cross- coupling reactions , vinyl iodides can also undergo elimination with a strong base to give the corresponding alkyne , and they can be converted to vinyl Grignard reagents . Vinyl iodides are converted to Grignard reagents by magnesium–halogen exchange (see Scheme 1a). [ 8 ] The scope of this synthetic method is limited since it requires higher temperatures and longer reaction times, which affect functional group tolerance. However, an electron-withdrawing group on the vinyl iodide can enhance the rate of exchange (see Scheme 1b). [ 8 ] Addition of lithium chloride also enhances magnesium–halogen exchange (see Scheme 1c); it is thought that lithium chloride breaks up aggregates of the organomagnesium reagents. [ 9 ]
Vinyl iodides are synthesized by methods such as iodination and substitution reactions . Vinyl iodides with well-defined geometry ( regiochemistry and stereochemistry ) are important in synthesis since many natural products and drugs have specific structures and dimensions. An example of regiochemistry is whether the iodide is positioned at the alpha or beta position of the olefin. Stereochemistry, such as E-Z notation or cis-trans alkene geometry, is important since some transition-metal cross- coupling reactions , such as the Suzuki coupling , can retain olefin geometry. In synthesis, it is useful to introduce a vinyl iodide at various positions to set up a coupling reaction in the next synthetic step. Below are various methods for introducing and synthesizing vinyl iodides.
The simplest and most common approach to making a vinyl iodide is the addition of one equivalent of HI to an alkyne . This generally gives 2-iodo-1-alkenes, or α-vinyl iodides, following Markovnikov's rule . However, this reaction does not proceed at good rates or with very high stereoselectivity . [ 10 ] As a result, most synthetic methods involve a hydrometalation step before addition of an I+ source.
Introducing an α-vinyl iodide at the terminal position of an alkyne is a difficult step. In addition, the vinyl metal intermediate can be mildly nucleophilic (for example, vinyl aluminum) and can form C–C bonds under catalytic conditions. However, the Hoveyda group has demonstrated that a nickel-based catalyst (Ni(dppp)Cl 2 ) with DIBAL-H and N -iodosuccinimide (NIS) selectively favors the α-vinyl iodide with little to no byproducts. [ 11 ] They also observed reverse selectivity for the β isomer with Ni(PPh 3 ) 2 Cl 2 in hydroalumination reactions under the same conditions, again with little or no byproducts. The advantages of this method are that it is inexpensive (the reagents are commercially available), scalable, and a one-pot reaction.
Another method involves not hydrometalation but hydroiodation with an I 2 /hydrophosphine binary system, developed by Ogawa's group. [ 12 ]
The hydroiodation proceeds to give the Markovnikov-type adduct; no reaction is observed without addition of the hydrophosphine. In a plausible mechanism proposed by Ogawa's group, the hydrophosphine reacts with HI to form an intermediate complex that coordinates HI for Markovnikov hydroiodation of the alkyne. The advantages of this system are that the conditions are mild and tolerate a wide range of functional groups.
There are generally more methods for making β-vinyl iodides than α-vinyl iodides using hydrometalation (with aluminum via DIBAL-H ( hydroalumination ), with boron ( hydroboration ), or with HZrCp 2 Cl ( hydrozirconation )). [ 13 ] However, hydrometalation of alkynes bearing various functional groups often proceeds poorly and gives side products. The Chong group has demonstrated hydrostannation using Bu 3 SnH with a palladium catalyst, with high E stereoselectivity. [ 13 ] They observed that sterically bulky ligands gave higher regioselectivity for the β-vinyl iodide. The advantage of this technique is that it can tolerate a wide range of functional groups.
Z-selective β-vinyl iodides are slightly more difficult to introduce than E-β-vinyl iodides, often requiring more than one step. Hydroalumination and hydroboration usually proceed in a syn fashion and therefore favor the E geometry. The Oshima group has demonstrated that hydroindation with HInCl 2 selectively favors the Z geometry. [ 14 ] They suggested that the reaction proceeds by a radical mechanism, with HInCl 2 adding to the alkyne by radical addition in a Z geometry. The product does not isomerize to the E geometry because of the low reactivity of the InCl 2 radical with the intermediate complex (no second addition); if a second addition occurred, isomerization would proceed through a diindium intermediate. They confirmed a radical mechanism in a mechanistic study with an alkyne and alkene cyclization.
Substitution is perhaps the most useful method for introducing a vinyl iodide into a molecule. Halogen exchange can be useful since vinyl iodides are more reactive than other vinyl halides . The Buchwald group demonstrated a halogen exchange from vinyl bromide to vinyl iodide with a copper catalyst under mild conditions. [ 15 ] It is possible that this method can tolerate various functional groups, since the conditions were initially tested on aryl halides. The scope of this exchange with respect to regiochemistry and stereochemistry is currently unexplored.
Halogen exchange can also be done with zirconium derivatives that retain the olefin 's geometry. [ 16 ]
The Marek group has further investigated the use of a zirconium catalyst on E- or Z- vinyl ethers , a method which is selective for E-vinyl ethers. [ 16 ] Zirconium's oxophilic nature allows elimination of the alkoxy group at the β position to form an intermediate vinyl zirconium complex. The E selectivity is not caused by sterics; rather, the reaction itself is not concerted. In a mechanistic study, they observed isomerization , which suggests that the E product is more favored than the Z product. The difference between the halogen-exchange and E-vinyl ether reactions is that isomerization is observed only in the presence of an oxonium intermediate.
An interesting substitution reaction is the conversion of vinyl boronic acids to vinyl iodides, reported by Brown's group. [ 17 ] Depending on the order of addition of iodide and base, the vinyl borate can yield different stereoisomers of the vinyl iodide (see scheme 2a). The Whiting group, however, noticed that Brown's method was not applicable to more sterically hindered boronic esters (no reaction). [ 18 ] They proposed that the iodide source was not electropositive enough, so they used ICl, which is more polar than I 2 , and observed similar results (see scheme 2b).
Radical substitution of a carboxylic acid by iodide is demonstrated by a modified Hunsdiecker reaction . [ 19 ] Homolytic cleavage of the O–I bond generates CO 2 and a vinyl radical, which recombines with an iodine radical to form the vinyl iodide.
Iododesilylation is a substitution of a silyl group by iodide. Its advantages are that it avoids toxic tin reagents, and the intermediate vinyl silanes are stable, nontoxic, and easily handled and stored. Vinyl silanes can be made from terminal alkynes or by other methods.
Kishi's group reported a mild preparation of vinyl iodides from vinyl silanes using NIS in a mixture of acetonitrile and chloroacetonitrile . [ 20 ] They observed retention of olefin geometry for some vinyl silane substrates and inversion for others, and reasoned that the size of the R group affects the geometry of the olefin: if the R group is small, the solvent acetonitrile can participate in the reaction, leading to inversion of the olefin's geometry; if the R group is large, the solvent cannot participate, leading to retention of the olefin's geometry.
Zakarian's group then ran the reaction in HFIP , which gave high retention of olefin geometry. [ 21 ] They reasoned that HFIP, unlike acetonitrile , is a solvent of low nucleophilicity. In addition, they observed an accelerated reaction rate because HFIP activates NIS by hydrogen bonding .
Unfortunately, iododesilylation under the conditions above can yield multiple byproducts in highly functionalized molecules bearing oxygen functional groups . Vilarrasa and Costa's group hypothesized that radical reactions producing HI and I 2 facilitate cleavage of alcohol protecting groups and may add across other alkene bonds. [ 22 ] They experimented with silver additives, such as silver acetate and silver carbonate , in which the silver reacts with the excess iodide to form silver iodide , and achieved better conversions under these conditions.
Some well-known vinyl iodide syntheses involve conversion of an aldehyde or ketone to a vinyl iodide. Barton's hydrazone iodination method involves addition of a hydrazine to an aldehyde or ketone to form a hydrazone , which is then converted to the vinyl iodide by addition of iodine and DBU . [ 23 ] [ 24 ] This method has been used in the natural product syntheses of Taxol by Danishefsky [ 25 ] and Cortistatin A by Shair. [ 26 ] Another method is the Takai olefination , which uses iodoform and chromium(II) chloride to make a vinyl iodide from an aldehyde with high stereoselectivity for the E geometry. [ 27 ] For high Z stereoselectivity, the Stork–Zhao olefination proceeds by a Wittig -like reaction; high yields and Z selectivity are obtained at low temperature and in the presence of HMPA . [ 28 ]
Below is an example employing both the Takai olefination and the Stork–Zhao olefination in the total synthesis of (+)-3-(E)- and (+)-3-(Z)-pinnatifidenyne. [ 29 ]
Vinyl iodides are rarely made by an elimination reaction of a vicinal diiodide because such diiodides tend to decompose to the alkene and iodine. [ 30 ] The Baker group has shown that the elimination can occur via decarboxylation. [ 31 ]
|
https://en.wikipedia.org/wiki/Vinyl_iodide_functional_group
|
Vinyl roof refers to a vinyl covering for an automobile's top. [ 1 ]
This covering was originally designed to give the appearance of a convertible to models with a fixed roof and eventually evolved into a styling statement in its own right. Vinyl roofs were most popular in the North American market, and they are considered one of the period hallmarks of the 1970s domestic cars. Vinyl roofs were also popular on European- (especially UK-) and Japanese-built cars during the 1970s, and tended to be applied to sporting or luxury trim versions of standard saloon (sedan) models.
The vinyl roof cover originated during the 1920s as a necessity, to keep precipitation off the occupants of the car. [ 2 ] Other materials included leather and canvas. Some coverings replicated the appearance of a movable top, similar to those on horse carriages, along with landau bars. [ 2 ]
The use of vinyl to cover the roofs of regular automobiles was to "give fixed-roof cars some of the flair and appeal of their convertible counterparts." [ 3 ] An example is the 1928 - 1929 Ford Model "A" Special Coupe, featuring a roof completely covered with a vinyl-like material. This Model "A" Special Coupe's vinyl roof had two exposed seams on the back corners, with a lateral seam on the top covered with a narrow trim strip.
The technique fell out of favor in the 1930s as painted steel came to be considered a better roof. [ 2 ] Smoother, all-metal pontoon bodies became fashionable.
After World War II, the first example of using a fabric-covered top as a styling element, rather than a functional accessory, was the 1949 Kaiser Virginian. [ 3 ] This four-door sedan model was a fixed-roof version of the Kaiser Manhattan four-door convertible and the roof was covered with the same nylon fabric as the convertible.
The pillarless hardtop body style was introduced to resemble convertibles. Because Ford did not have the hardtop body style offered by General Motors and Chrysler, a vinyl-covered roof was optional on the 1950 two-door Ford Crestliner, Mercury Monterey, and Lincoln Lido models in an effort to simulate the look of a convertible. [ 3 ] This was not popular, and the vinyl-covered models were discontinued the following year as Ford introduced pillarless hardtop models for 1951. [ 3 ]
Kaiser Motors introduced a unique trim option in 1951 for their all-new full-size four-door sedans. [ 4 ] The interior vinyl upholstery featured a simulated reptile pattern, and an optional padded vinyl-covered roof with a lizard skin pattern was named "Dinosaur." [ 4 ] [ 5 ] The automaker introduced a special luxury model, the Kaiser Dragon , for the 1953 model year. [ 6 ] [ 7 ] In addition to numerous standard equipment and features, such as a 14- karat gold-plated hood ornament and nameplates, the car's special upholstery and padded roof now featured a grass-patterned "Bambu" vinyl. [ 4 ] [ 8 ]
Lincoln simulated the appearance of a convertible on some Cosmopolitan coupes in the 1950s, as did Kaiser on Manhattan sedans, although the material was canvas, as used on the folding tops of convertibles of the time, rather than vinyl. [ 3 ]
For the 1959 model year, Chrysler 's Imperial featured a Landau top and the 1959 DeSoto Adventurer Sportsman hardtop had a full roof that was not covered with vinyl, but both models had a textured black paint that was designed to look like leather. [ 3 ]
Probably the first modern vinyl roof as it would later be accepted was the 1956 Cadillac Eldorado Seville that came standard with a roof covered in an early vinyl material called "Vicodec" which was simply diamond point convertible fabric. [ 3 ] The recommended cleaning methods were the same for the Eldorado as regular convertible tops. [ 9 ] The fabric was applied over a thin pad with two parallel seams running the length of the roof to mimic the appearance of a convertible model. [ 3 ] Sales were low, but the Eldorado Seville with its vinyl roof was produced until 1960. [ 3 ]
Ford followed a few years later with a vinyl roof option on the 1962 Ford Thunderbird that also re-introduced landau bars as a styling element. The vinyl covering proved popular, and some form of vinyl trim would be seen on Thunderbird roofs for the next two decades.
Other manufacturers followed. Vinyl appeared on some coupe models in GM's 1962 full-size line. Chrysler made a vinyl roof available on the Dodge Dart . [ 1 ] Ford offered it on the Mustang . By mid-decade, four-door sedans, as well as coupes and station wagons offered by all automakers could be topped with several colorful types of vinyl.
Vinyl-covered roofs became very common in most car classes by the late-1960s. Vinyl was produced that mimicked other materials such as canvas, and even alligator or snake hide. Chrysler briefly produced some patterns, with paisley or floral designs – this was called the "Mod Top" option. The Mercury Cougar briefly offered a houndstooth pattern. There was even an aftermarket spray-on product that claimed to add that factory vinyl look. By 1972, even the Ford Pinto offered a vinyl roof option.
At about that same time, the modern opera window first appeared, and it went so well with a vinyl surround that the two together became emblematic of American body design in the 1970s. During this period, vinyl with padding under it was sometimes used, allowing the top to somewhat mimic the feel as well as the look of a genuine convertible.
European and Japanese manufacturers offered vinyl-covered roofs. Chrysler's European-built cars used it on upmarket models of its Hunter and Avenger saloons. Ford had vinyl roofs on its European Escorts, Cortinas, Taunuses, and Granadas into the early 1980s. British Leyland had vinyl roofs on the last Wolseley and top-end Princess models and the feature was optional for all other models. Volvo featured a standard vinyl top on its most luxurious two-door 262 C coupe. [ 10 ] Toyota adopted vinyl roofs for its Corona Mark II , Crown and Century sedans in the mid-1970s, and they could be found on Nissan Laurels , Cedrics , and Glorias .
Vinyl-covered roofs continued to appear in many car lines through the 1980s, but the coming of the "aero look," first introduced to the U.S. market by the 1983 Thunderbird, tended to militate against both opera windows and vinyl roofs, as their more formal style did not go well with the sleek profile designers were beginning to emphasize. Canvas-look tops, often called cabriolet roofs, with simulated convertible top bows under the fabric, gained some popularity. The availability of all vinyl styles dwindled in the 1990s, until the 2002 Lincoln Continental offered one of the last factory-applied versions.
Hearse and limousine bodies almost universally still have vinyl tops. [ citation needed ] Not only are they part of the expected style of those vehicles, but they have a practical advantage in covering up the welded body seams that result when standard sedans are stretched to greater length.
Aftermarket customizers also continue to install vinyl roofs of various types. These are usually seen on Cadillacs and Lincolns, but can be fitted to virtually any car.
Several vinyl roof designs evolved during the 1960s and 1970s:
The opera window was mounted in this pillar and was surrounded by sheet metal, not adjacent to either vinyl area. Three windows were mounted on each side of these cars; the Fairmont Futura had a similar style, differing only in not using the center opera window.
A comparable two-piece roof covering was available on the AMC Pacer that emphasized the bump in the roof that accommodated the roll bar over the passenger compartment. [ 11 ]
|
https://en.wikipedia.org/wiki/Vinyl_roof
|
A vinyl sulfone is an organic compound with the formula O 2 S(CH=CH 2 ) 2 . The molecule consists of two vinyl groups bonded to a sulfonyl group. It is the parent of several vinyl sulfones of the type O 2 S(CH=CH 2 )R . [ 1 ] Many vinyl sulfones are known.
Examples include phenyl vinyl sulfone, [ 2 ] methyl vinyl sulfone, [ 3 ] and ethyl vinyl sulfone. [ 4 ]
Divinyl sulfone is prepared from the diacetate of bis(2-hydroxyethyl) sulfide. Oxidation of this diester with hydrogen peroxide gives the sulfone, which is then pyrolyzed to induce elimination of two equivalents of acetic acid: [ 5 ]
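Schematically, the sequence just described can be written as follows (AcO = acetate; a sketch reconstructed from the text, not the source's own scheme):

```latex
\[
\mathrm{S(CH_2CH_2OAc)_2}
\;\xrightarrow{\;2\,\mathrm{H_2O_2}\;}\;
\mathrm{O_2S(CH_2CH_2OAc)_2}
\;\xrightarrow{\;\Delta\;}\;
\mathrm{O_2S(CH{=}CH_2)_2} \;+\; 2\,\mathrm{AcOH}
\]
```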
Other vinyl sulfones are prepared analogously, i.e. by H 2 O 2 oxidation of the corresponding sulfide . [ 6 ]
Vinyl sulfones are dienophiles . Subsequent to the cycloaddition to a vinyl sulfone, the phenylsulfonyl group can be removed by reduction with zinc. [ 7 ]
Vinyl sulfones are Michael acceptors . [ 8 ] Vinyl sulfones add thiols , such as cysteine residues. [ 9 ] This same reactive nature is responsible for their major industrial use in vinyl sulfone dyes . [ 10 ]
Phenyl vinyl sulfone has been applied to ruthenium chemistry as part of olefin metathesis reactions. [ 11 ]
Vinyl sulfone has applications to protein purification , especially when linked with mercaptoethanol . [ 12 ]
Vinyl sulfone has uses as a molluscicide pesticide . [ 13 ]
Like similar compounds, vinyl sulfone is a lacrymator and skin irritant. These properties are somewhat mitigated because of its low volatility. [ 8 ]
|
https://en.wikipedia.org/wiki/Vinyl_sulfone
|
In chemistry , vinylene (also ethenylene or 1,2-ethenediyl ) [ 1 ] is a divalent functional group (a part of a molecule ) [ 2 ] with formula −CH=CH−; [ 3 ] namely, two carbons, each connected to the other by a double bond, to a hydrogen atom by a single bond, and to the rest of the molecule by another single bond. [ 4 ] [ 5 ]
This group can be viewed as a molecule of ethene (ethylene, H 2 C=CH 2 ) with a hydrogen removed from each carbon; or a vinyl group −CH=CH 2 with one hydrogen removed from the terminal carbon. [ 6 ] [ 7 ] [ 8 ] It should not be confused with the vinylidene group =C=CH 2 or >C=CH 2 .
A vinylene unit attached to two distinct atoms other than hydrogen (namely R−CH=CH−R') is a source of cis-trans isomerism . [ 9 ] [ 10 ]
The vinylene group is the repeating unit in polyacetylene and in polyenes . [ 11 ]
|
https://en.wikipedia.org/wiki/Vinylene_group
|
In chemistry , vinylidenes are compounds with the functional group C=CH 2 . An example is 1,1-dichloroethene (CCl 2 =CH 2 ) commonly called vinylidene chloride . It and vinylidene fluoride are precursors to commercially useful polymers.
Vinylidene chloride and fluoride can be converted to linear polymers polyvinylidene chloride (PVDC) and polyvinylidene fluoride (PVDF). The polymerization reaction is:
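For the chloride, the addition polymerization can be sketched as follows (the fluoride behaves analogously); this is a schematic reconstruction, not the source's own equation:

```latex
\[
n\,\mathrm{CH_2{=}CCl_2} \;\longrightarrow\; \left[\mathrm{CH_2{-}CCl_2}\right]_n
\]
```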
These vinylidene polymers are isomeric with those produced from vinylene monomers; thus polyvinylene fluoride is produced from vinylene fluoride (HFC=CHF).
Although vinylidenes are only transient species, they are found as ligands in organometallic chemistry . They typically arise by the protonation of metal acetylides or by the reaction of metal electrophiles with terminal alkynes. The complex chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium readily forms such complexes: [ 1 ]
Featuring divalent carbon, vinylidenes are unusual species in organic chemistry. They are unstable as solids or liquids but can be generated as stable dilute gases. The parent member of this series is methylidenecarbene , :C=CH 2 , which is a carbene .
In IUPAC nomenclature, 1,1-ethenediyl describes the connectivity >C=CH 2 . The related species ethenylidenes have the connectivity =C=CH 2 . [ 3 ]
|
https://en.wikipedia.org/wiki/Vinylidene_group
|
In organic chemistry , vinylogy is the transmission of electronic effects through a conjugated organic bonding system. [ 1 ] The concept was introduced in 1926 by Ludwig Claisen to explain the acidic properties of formylacetone and related ketoaldehydes . Formylacetone, technically CH 3 (C=O)CH 2 CH=O , only exists in the ionized form CH 3 (C−O − )=CH−CH=O or CH 3 (C=O)−CH=CH−O − . [ 2 ] Its adjectival form, vinylogous , is used to describe functional groups in which the standard moieties of the group are separated by a carbon–carbon double bond .
For example, a carboxylic acid is defined as a carbonyl group ( C=O ) directly attached to a hydroxyl group ( OH ): O=C–OH. A vinylogous carboxylic acid has a vinyl unit ( −HC=CH− , vinylene) between the two groups that define the acid: O=C–C=C–OH. The usual resonance of a carboxylate can propagate through the alkene of a vinylogous carboxylate. Likewise, 3-dimethylaminoacrolein is the vinylogous-amide analog of dimethylformamide .
Due to the transmission of electronic information through conjugation, vinylogous functional groups often possess " analogous " reactivity or chemical properties compared to the parent functional group. Hence, vinylogy is a useful heuristic for the prediction of the behavior of systems that are structurally similar but contain intervening C=C bonds that are conjugated to the attached functional groups. For example, a key property of carboxylic acids is their Brønsted acidity . The simplest carboxylic acid, formic acid ( HC(=O)−OH ), is a moderately strong organic acid with a p K a of 3.7. We would expect vinylogous carboxylic acids to have similar acidity. Indeed, the vinylog of formic acid, 2-formyl-1-ethen-1-ol, HC(=O)−CH=CH−OH has a substantial Brønsted acidity, with an estimated p K a ~ 5–6. In particular, vinylogous carboxylic acids are substantially stronger acids than typical enols (p K a ~ 12). Vitamin C ( ascorbic acid , see below ) is a biologically important example of a vinylogous carboxylic acid.
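Because pKa is a logarithmic scale (Ka = 10^(−pKa)), these values imply large acidity ratios; taking pKa ≈ 6 for the vinylogous acid as an illustrative value from the estimated range:

```latex
\[
\frac{K_a(\text{vinylogous acid})}{K_a(\text{typical enol})} = \frac{10^{-6}}{10^{-12}} = 10^{6},
\qquad
\frac{K_a(\text{formic acid})}{K_a(\text{vinylogous acid})} = \frac{10^{-3.7}}{10^{-6}} \approx 2\times10^{2}.
\]
```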
The insertion of an o - or p - phenylene (i.e., a benzene ring in the 1,2- or 1,4-orientation) also results in some similarities in reactivity (called "phenylogy"), although the effect is generally weaker, as conjugation through the aryl ring requires consideration of resonance forms or intermediates in which aromaticity is disrupted. [ 3 ] [ 4 ]
Vinylogous reactions are believed to occur when orbitals of the double bonds of the vinyl group and of an attached electron-withdrawing group (EWG; the π orbitals) are aligned and so can overlap and mix (i.e., are conjugated ). Electron delocalization enables the EWG to receive electron density through participation of the conjugated system.
A classic example of vinylogy is the relatively high acidity of the γ-hydrogen in CH 3 CH=CHC(O)R . The acidity of the terminal methyl group is similar to that for the methyl ketone CH 3 C(O)R . [ 5 ]
Vinylogous reactions also include conjugate additions , where a nucleophile reacts at the vinyl terminus, akin to the addition of the nucleophile to the carbonyl of the methyl ketone. In a vinylogous variation of the aldol reaction , an electrophile is attacked by a nucleophilic vinylogous enolate (see first and following image). The vinylogous enolate reacts at the terminal position of the double bond system (the γ-carbon), rather than the α-carbon immediately adjacent to the carbonyl, as would a simple enolate. Allylic electrophiles often react by vinylogous attack of a nucleophile rather than direct addition.
A further example of vinylogous reactivity: ascorbic acid (Vitamin C) behaves as a vinylogous carboxylic acid by involvement of its carbonyl moiety, a vinyl group within the ring, and the lone pair on the hydroxyl group acting as the conjugated system . Acidity of the hydroxyl proton at the terminus of the vinyl group in ascorbic acid is more comparable to a typical carboxylic acid than an alcohol because two major resonance structures stabilize the negative charge on the conjugate base of ascorbic acid (center and right structures in last image), analogous to the two resonance structures that stabilize the negative charge on the anion that results from removal of a proton from a simple carboxylic acid (cf. first image).
|
https://en.wikipedia.org/wiki/Vinylogy
|
Viologens are organic compounds with the formula (C 5 H 4 NR) 2 n+ . In some viologens, the pyridyl groups are further modified. [ 1 ]
Viologens are so called because these compounds produce a violet color on reduction [violet + Latin gen , generator of].
The viologen paraquat (R = methyl) is a widely used herbicide . As early as the 1930s, paraquat was being used as an oxidation-reduction indicator, because it becomes violet on reduction. [ 2 ]
Other viologens have been commercialized because they can change color reversibly many times through reduction and oxidation . The name viologen alludes to violet, one color it can exhibit, and the radical cation (C 5 H 4 NR) 2 + is colored intensely blue.
As bipyridinium derivatives, the viologens are related to 4,4'-bipyridyl . The basic nitrogen centers in these compounds are alkylated to give viologens:
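Schematically (R comes from the alkylating agent RX, X = halide; a sketch, not the source's own scheme):

```latex
\[
\mathrm{(C_5H_4N)_2} \;+\; 2\,\mathrm{RX} \;\longrightarrow\; [\mathrm{(C_5H_4NR)_2}]^{2+} \;+\; 2\,\mathrm{X^-}
\]
```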
The alkylation is a form of quaternization . When the alkylating agent is a small alkyl halide , such as methyl chloride or methyl bromide , the viologen salt is often water-soluble. A wide variety of alkyl substituents have been investigated. Common derivatives are methyl (see paraquat ), long chain alkyl, and benzyl.
Viologens, in their dicationic form, typically undergo two one-electron reductions. The first reduction affords the deeply colored radical cation: [ 3 ]
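Written schematically, with V2+ denoting the viologen dication (a sketch, not the source's own equation):

```latex
\[
\mathrm{V^{2+}} + e^- \;\rightleftharpoons\; \mathrm{V^{\bullet +}}
\]
```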
The radical cations are blue for 4,4'-viologens and green for 2,2'-derivatives. The second reduction yields a yellow quinoid compound:
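The second step, in the same shorthand:

```latex
\[
\mathrm{V^{\bullet +}} + e^- \;\rightleftharpoons\; \mathrm{V^{0}}
\]
```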
The electron transfer is fast because the redox process induces little structural change . The redox is highly reversible. These reagents are relatively inexpensive among redox-active organic compounds. They are convenient colorimetric reagents for biochemical redox reactions.
Their tendency to form host–guest complexes is key to the molecular machines recognized by the 2016 Nobel Prize in Chemistry .
Viologens are used in the negative electrolytes of some experimental flow batteries . Viologens have been modified to optimize their performance in such batteries, e.g. by incorporating them into redox-active polymers. [ 6 ]
Viologen catalysts have been reported to have the potential to oxidize glucose and other carbohydrates catalytically in a mildly alkaline solution, which makes direct carbohydrate fuel cells possible. [ 7 ]
Diquat is an isomer of viologens, being derived from 2,2'-bipyridine (instead of the 4,4'-isomer). It also is a potent herbicide that functions by disrupting electron-transfer.
Extended viologens have been developed in which conjugated oligomers based on aryl , ethylene , and thiophene units are inserted between the pyridine units. [ 8 ] The bipolaron di-octyl bis(4-pyridyl)biphenyl viologen 2 in scheme 2 can be reduced by sodium amalgam in DMF to the neutral viologen 3 .
The resonance structures of the quinoid 3a and the biradical 3b contribute equally to the hybrid structure. The driving force for the contributing 3b is the restoration of aromaticity with the biphenyl unit. It has been established using X-ray crystallography that the molecule is, in effect, coplanar with slight nitrogen pyramidalization , and that the central carbon bonds are longer (144 pm ) than what would be expected for a double bond (136 pm). Further research shows that the diradical exists as a mixture of triplets and singlets , although an ESR signal is absent. In this sense, the molecule resembles Tschischibabin's hydrocarbon , discovered during 1907. It also shares with this molecule a blue color in solution, and a metallic-green color as crystals.
Compound 3 is a very strong reducing agent , with a redox potential of −1.48 V.
The widely used herbicide paraquat is a viologen. This application is the largest consumer of this class of compounds. The toxicity of the 2,2'-, 4,4'-, or 2,4'-bipyridylium-based viologens is related to their ability to form stable free radicals . This redox activity allows these species to interfere with the electron transport chain in the plant. [ 9 ] [ 10 ] [ 11 ]
Viologens have been commercialized as electrochromic systems because of their highly reversible and dramatic change of color upon reduction and oxidation . In some applications, N-heptyl viologens are used. Conducting solid supports such as titania and indium tin oxide have been used. [ 4 ]
|
https://en.wikipedia.org/wiki/Viologen
|
Viral biological control is the implementation of viruses to control or deplete pest populations. Viruses have high host specificity, allowing targeted infections that are unlikely to impact other organisms. Viral biological control methods are studied and used globally for sustainable agricultural practices, controlling invasive species , and disease management in humans and food. Viral biological control is heavily researched as an alternative to chemical pest control methods, as viruses are made from natural genetic material and will biodegrade, and researchers use viruses as a selective control protocol when targeting invasive species. Bacteriophages are being implemented and explored to combat diseases and foodborne diseases.
Viral biological control methods may have been studied as early as 2700 BC in China for pest management in silkworms. [ 1 ] However, the earliest documented cases of viral pest control were recorded in the late 1800s and early 1900s. The first use of viruses for insect pest management occurred in 1892, when nuclear polyhedrosis viruses were released in Germany to protect pine trees from Lymantria monacha , the black arches moth. [ 2 ] Viruses have been studied as pesticides around the world, as pests are a global issue affecting all types of terrain, climates, and organisms. In 1896, the first observations of bacteriophages (bacterial viruses) and antibacterial elements were made in the Ganges and Jumna rivers in India; scientists noted a decline in Vibrio cholerae , the cause of cholera, and the agent responsible was identified later. [ 3 ] The use of bacteriophages to combat bacterial infections of plants was documented in 1924, when the scientists Mallman and Hemstreet discovered a liquid secreted by cabbage that prevented rot from Xanthomonas campestris , black rot . [ 4 ] The following year, the scientists Kotila and Coons isolated phages directed against Pectobacterium carotovorum , the cause of blackleg disease in potatoes. [ 4 ] In the early 1960s, after beginning research in the 1950s, scientists in China utilized multiple viruses, including ones targeting Agrotis segetum and Apamea sordens , against insect pests to protect agriculture, pastures, and gardens. [ 5 ] The first use of viruses as biological pest control in the United States was in 1970, when viral-based insecticides were used to deplete the population of Helicoverpa armigera , the cotton bollworm , a voracious moth that eats cotton and other crops. [ 6 ]
Invasive animals are a global issue, causing ecological damage and filling the niches of indigenous species. Utilizing viruses to control animal populations reduces invasive populations and reduces animal vectors for diseases.
Insects are the main vectors for spreading diseases among organisms. Insect vectors disperse pathogens through their travel, direct contact, and interaction with organisms. In China, over 32 virus species are implemented for biological control in agriculture, forestry, and domestic areas, and they have a 0.2% prevalence in China's overall insecticide protocols. [ 5 ] These insect viruses include Helicoverpa armigera nucleopolyhedrovirus, Mamestra brassicae nucleopolyhedrovirus, Spodoptera litura nucleopolyhedrovirus, and Periplaneta fuliginosa densovirus; many of these viruses were genetically altered to increase their infection rate and their resistance to UV light, a main obstacle since most viruses are UV-light sensitive. [ 5 ] Various species of Lepidoptera , such as Spodoptera exempta , the African armyworm, and Lymantria dispar dispar , the gypsy moth, have high reproduction rates and sporadic outbreaks that cause ecological destruction. [ 1 ] Researchers found that Spodoptera exempta is most susceptible to nucleopolyhedroviruses at the larval stage, and that the virus can be transmitted both horizontally and vertically, remaining latent until sudden expression. [ 1 ] The triggers for expression or infection in the case of vertical transmission remain unknown. [ 1 ] Similar studies found nucleopolyhedroviruses to affect Spodoptera exempta , aiding in large outbreaks, but noted that high levels of tannin, a chemical found in woody plants, reduced viral virulence . [ 1 ]
Rodents are a leading group of invasive species, acting as vectors of pathogens, filling habitat niches, and overgrazing plants. Oryctolagus cuniculus , the European rabbit, was introduced to Australia in 1788 as livestock and released for Europeans to hunt. [ 7 ] The rabbits soon became widespread and, through overgrazing, changed Australia's ecological systems. In the 1950s, researchers implemented Myxoma virus (MYXV), a virus indigenous to South America that is transmitted by arthropods such as mosquitoes, fleas, and ticks, to reduce rabbit populations. [ 8 ] The virus did not transmit well and died off during this first attempt. [ 8 ] However, when scientists implemented the Myxoma virus in Europe, where Oryctolagus cuniculus is also abundant and destructive, they found more promising results. [ 8 ] Europe is more humid, attracting more arthropod vectors, whereas the areas of release in Australia are more arid and the releases were made during the Australian fall season. [ 8 ] Researchers continue to investigate the Myxoma virus along with other viruses that may manage Oryctolagus cuniculus and other hare populations globally. Other viruses studied include Californian MYXV, Rabbit Fibroma Virus, Hare Fibroma Virus, Squirrel Fibroma Virus, and other species of Leporipoxvirus . [ 8 ] Like the Myxoma virus, most are transmitted by arthropods such as mosquitoes, mites, fleas, and ticks, but target different parts of the hare.
Bacterial pathogens are a leading problem across organismal species, having many vectors, reservoirs, and routes of direct infection. Utilizing viruses in the form of bacteriophages is showing progress in the treatment of foodborne ailments and bacterial diseases. Scientists use phages as markers and indicators of food contamination because a phage needs a host, so viral presence indicates a pre-existing host. [ 3 ] Campylobacter , Salmonella , and Listeria monocytogenes showed promising results with bacteriophages used to prevent meat spoilage, with a reduction in numbers after viral methods were implemented. [ 3 ] The virus is administered to the meat post-slaughter, but cases of treating livestock before slaughter also show reduced concentrations of foodborne bacteria. [ 3 ] The lytic bacteriophages (viruses that lyse bacterial cells) P100 and A511 have been isolated and are accepted for Listeria monocytogenes control in meats, as infection with the bacterium has a death rate of 25-30%. [ 9 ] Disease management using viruses is a leading area of study, but few viruses have been approved for treatment. Staphylococcus aureus is carried by 30% of humans but can lead to lethal infections, sepsis , and death . Studies find that the lytic bacteriophage phiAGO1.3 reduces host infection and renders symptoms latent. [ 9 ]
Researchers are also utilizing phages against plant bacterial infections for ecological protection and the reduction of invasive species. Bacteriophages are abundant in both marine and terrestrial habitats, but mostly below the soil surface or in areas with low UV-light exposure. [ 4 ] In oceans, bacteriophages are important for nutrient cycles, degrading organic materials, and regulating bacterial growth in aquatic ecosystems. [ 9 ] The lytic bacteriophage BONAISHI was dispersed in coral reef habitats and reduced Vibrio coralliilyticus , a bacterium responsible for severe global coral reef bleaching. [ 9 ] Biofilms are abundant both in nature and on medical facilities and equipment. Lytic phages were also used to treat Pseudomonas aeruginosa for waste management and water protection. [ 9 ] Researchers found the viruses reduced the size of the biofilm, but research continues, as biofilms are accumulations of multiple bacterial species and thus multiple viruses will be needed. [ 9 ] To combat the loss of plants, researchers isolated two phage strains against Ralstonia solanacearum UA1591, the cause of Moko disease (bacterial wilt disease) in banana and plantains. [ 10 ] Strains of Ralstonia solanacearum also affect tomato plants, and researchers narrowed the candidates to phages φRSA1, φRSB1, φRSC1, and φRSL1 to control wilting and rot for 18 days. [ 10 ]
Mycoviruses are explored in sustainable agricultural practice to protect plants. Fungi are major plant pathogens, with approximately 10,000 species pathogenic to plants. [ 11 ]
Cryphonectria parasitica , or chestnut blight, is a fungus indigenous to China, Japan, and Korea that spread to the United States through the importation of Japanese chestnut trees from Honshu . [ 12 ] It was first found in 1904 in New York City , US, and in 1938 near Genoa , Italy, due to trade and importation with the US. [ 12 ] In the 1950s, researchers looked into hypovirulence , the use of viruses from the genus Hypovirus and family Hypoviridae , to target Cryphonectria parasitica and stop the fungal spread and damage. [ 13 ] In 1978, Italy implemented the first hypovirulent strain against Cryphonectria parasitica on chestnut trees and reported the elimination of cankers from the blight within ten years of treatment. [ 13 ] The researchers treated ten cankers per hectare with the fungal virus for the first three years of treatment, followed by five cankers per hectare in the years afterward. [ 13 ] The main species of hypovirus has four variants of Cryphonectria hypovirus : CHV-1, CHV-2, CHV-3, and CHV-4. CHV-1 is the most studied variant implemented in Europe and was first introduced in Italy. CHV-1 shows the greatest reduction in the virulence of Cryphonectria parasitica ; later studies confirm that CHV-2 and CHV-3 also lower virulence, while CHV-4 does not show significant virulence reduction. [ 12 ] Cryphonectria hypovirus resides in the fungus's cytoplasm , is transmitted horizontally through asexual spores , and can be spread by vectors such as mites that feed on plants or fungi. [ 12 ]
Viruses reside in all plant populations and are under consideration for pre-immunization, or cross-protection, against other plant viruses to lower infection rates and symptoms. [ 14 ] Viral satellites, subviral agents that rely on an additional virus to infect the host, show pre-immunization effects by inhibiting the virulence of viruses when the virus is unable to attach to and infect the host's cells. [ 15 ] Cucumber mosaic virus causes stunted growth in plants, discolored leaves, necrosis, and other symptoms leading to plant death. Utilization of the viral satellites CMV-KU1 and CMV-KU2 altered the virus's phenotypic expression, reducing virulence due to the interaction with the subviral particle (Montasser, 2013). Other implementations of viruses include controlling weeds and other invasive plant species, which are main contributors to the loss of indigenous and agricultural plants. Researchers have isolated several viruses, including Tobacco Mild Green Mosaic Tobamovirus, Tobacco Rattle Virus , and Araujia Mosaic Virus, to combat invasive weeds. Tobacco Mild Green Mosaic Tobamovirus is used to control Solanum viarum , tropical soda apple, because the weed's fast reproductive cycle and quick spread disrupt indigenous plant species in Florida. [ 16 ] Araujia Mosaic Virus is used against the vining Araujia hortorum , the moth plant, in New Zealand. [ 16 ] The combined and individual use of Óbuda Pepper Virus and Pepino Mosaic Virus has been shown to reduce the growth of Solanum nigrum , black nightshade, specifically in Europe. [ 16 ]
Viral biological control is a relatively recently documented field, but it is rapidly expanding and being fine-tuned. Complications of viral control methods include viral resistance in the target pathogen and potential off-target effects. [ 17 ] [ 18 ] [ 4 ] Pathogens become resistant to pesticides over time and have been shown to become resistant to viruses, mitigating their effects, as with the European rabbits in Australia becoming resistant to Myxoma virus. [ 8 ] There is a risk of off-target effects: viruses are specific to their hosts but can mutate to infect other organisms, although there have not been many such cases, so this raises little concern. [ 14 ] Conversely, because viruses are specific to their hosts, multiple strains of viruses in the same viral family might not be effective on the same organism, and research to find the correct viral strains to combat an invasive organism is costly and time-consuming. [ 14 ] Another concern, specifically when treating invasive plants and fungi, is UV-light exposure, which causes genomic damage to the virus, disrupting its function and reducing the biological control effect. [ 4 ]
Nonetheless, researchers are finding viral biological control methods to be an effective alternative to other pest control approaches such as chemical pesticides, as viruses biodegrade, return to the soil, and are present in all ecosystems. [ 14 ] Despite the initial research costs, scientists find the production and development costs of viral control agents to be cheaper than those of other herbicides. [ 16 ] Among Canadian consumers, 70% expressed a preference for biological control methods such as viruses over synthetic pesticides. [ 16 ] Viral biological control is used globally, and further research is being conducted to strengthen current usage. Europe is further investigating the Cryphonectria hypoviruses CHV-1, CHV-2, CHV-3, and CHV-4 to find better viral cocktails and uses of the virus against other fungi. [ 12 ] Viruses as insecticides are under heavy research, as insects are main vectors for diseases and cause ecosystem degradation. The use of insects as viral reservoirs is also being considered for fungal, plant, and animal control mechanisms, owing to insects' wide range and dispersal. [ 19 ]
|
https://en.wikipedia.org/wiki/Viral_biological_control
|
Viral dynamics is a field of applied mathematics concerned with describing the progression of viral infections within a host organism. [ 1 ] It employs a family of mathematical models that describe changes over time in the populations of cells targeted by the virus and the viral load . These equations may also track competition between different viral strains and the influence of immune responses. The original viral dynamics models were inspired by compartmental epidemic models (e.g. the SI model), with which they continue to share many common mathematical features, such as the concept of the basic reproductive ratio ( R 0 ). The major distinction between these fields is in the scale at which the models operate: while epidemiological models track the spread of infection between individuals within a population (i.e. "between host"), viral dynamics models track the spread of infection between cells within an individual (i.e. "within host"). Analyses employing viral dynamic models have been used extensively to study HIV , [ 1 ] [ 2 ] hepatitis B virus , [ 3 ] [ 4 ] and hepatitis C virus , [ 5 ] [ 6 ] among other infections.
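As a hedged illustration of the kind of model this field employs (a sketch of the standard target-cell-limited model, with purely illustrative parameter values not taken from the cited studies), the populations of target cells T, infected cells I, and free virus V can be integrated numerically:

```python
# Minimal sketch of the standard target-cell-limited viral dynamics model:
#   dT/dt = lam - d*T - beta*T*V   (target cells: production, death, infection)
#   dI/dt = beta*T*V - delta*I     (infected cells)
#   dV/dt = p*I - c*V              (free virus: production and clearance)
# Parameter values are illustrative only, not drawn from the article.
import numpy as np
from scipy.integrate import odeint

def viral_dynamics(y, t, lam, d, beta, delta, p, c):
    T, I, V = y
    dT = lam - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.5, 100.0, 5.0  # per-day rates
R0 = beta * lam * p / (d * delta * c)   # within-host basic reproductive ratio
print(f"R0 = {R0:.1f}")                 # > 1 means the infection can take hold

t = np.linspace(0, 60, 601)             # days
y0 = [lam / d, 0.0, 1.0]                # uninfected steady state plus one virion
T, I, V = odeint(viral_dynamics, y0, t, args=(lam, d, beta, delta, p, c)).T
print(f"peak viral load ≈ {V.max():.2e}")
```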
|
https://en.wikipedia.org/wiki/Viral_dynamics
|
Viral entry is the earliest stage of infection in the viral life cycle , as the virus comes into contact with the host cell and introduces viral material into the cell. The major steps involved in viral entry are shown below. [ 1 ] Despite the variation among viruses, there are several shared generalities concerning viral entry. [ 2 ]
How a virus enters a cell differs depending on the type of virus. A virus with a nonenveloped capsid enters the cell by attaching to an attachment factor on the host cell. It then enters the cell by endocytosis or by making a hole in the host cell membrane and inserting its viral genome. [ 2 ]
Cell entry by enveloped viruses is more complicated. Enveloped viruses enter the cell by attaching to an attachment factor located on the surface of the host cell. They then enter by endocytosis or a direct membrane fusion event. The fusion event is when the virus membrane and the host cell membrane fuse together allowing a virus to enter. It does this by attachment – or adsorption – onto a susceptible cell; a cell which holds a receptor that the virus can bind to, akin to two pieces of a puzzle fitting together. The receptors on the viral envelope effectively become connected to complementary receptors on the cell membrane . This attachment causes the two membranes to remain in mutual proximity, favoring further interactions between surface proteins. This is also the first requisite that must be satisfied before a cell can become infected. Satisfaction of this requisite makes the cell susceptible. Viruses that exhibit this behavior include many enveloped viruses such as HIV and herpes simplex virus . [ 2 ]
These basic ideas extend to viruses that infect bacteria, known as bacteriophages (or simply phages). Typical phages have long tails used to attach to receptors on the bacterial surface and inject their viral genome.
Prior to entry, a virus must attach to a host cell. Attachment is achieved when specific proteins on the viral capsid or viral envelope bind to specific proteins called receptor proteins on the cell membrane of the target cell. The virus must then enter the cell, which is covered by a phospholipid bilayer, a cell's natural barrier to the outside world. The process by which this barrier is breached depends upon the virus; the main types of entry, described below, are membrane fusion, endocytosis, and genetic injection.
Through the use of green fluorescent protein (GFP), virus entry and infection can be visualized in real-time. Once a virus enters a cell, replication is not immediate and indeed takes some time (seconds to hours). [ 3 ] [ 4 ]
The most well-known example is membrane fusion. In a number of viruses with a viral envelope , viral receptors attach to the receptors on the surface of the cell, and secondary receptors may be present to initiate the puncture of the membrane or fusion with the host cell. Following attachment, the viral envelope fuses with the host cell membrane, allowing the virus to enter. Viruses that enter a cell in this manner include HIV , KSHV [ 5 ] [ 6 ] [ 7 ] [ 8 ] and herpes simplex virus . [ 9 ]
In SARS-CoV-2 and similar viruses, entry occurs through membrane fusion mediated by the spike protein , either at the cell surface or in vesicles. Research efforts have focused on the spike protein's interaction with its cell-surface receptor, angiotensin-converting enzyme 2 (ACE2). The spike protein has evolved a high level of fusion activity, which also enables it to mediate cell-to-cell fusion. [ 10 ] Current prophylaxis against SARS-CoV-2 infection targets the spike (S) protein, which harbors the capacity for membrane fusion. [ 11 ] Vaccines work by blocking the interaction of the viral S glycoprotein with the cell, thus preventing fusion of the viral and host cell membranes. [ 12 ] The fusion mechanism is also studied as a potential target for antiviral development. [ 13 ]
Viruses with no viral envelope generally enter the cell through endocytosis ; they "trick" the host cell into ingesting the virions through the cell membrane. Cells can take in resources from the environment outside of the cell, and these mechanisms may be exploited by viruses to enter a cell in the same manner as ordinary resources. Once inside the cell, the virus leaves the host vesicle by which it was taken up and thus gains access to the cytoplasm. Examples of viruses that enter this way include the poliovirus , hepatitis C virus , [ 14 ] and foot-and-mouth disease virus . [ 15 ]
Many enveloped viruses, such as SARS-CoV-2 , also enter the cell through endocytosis. Entry via the endosome guarantees low pH and exposure to proteases which are needed to open the viral capsid and release the genetic material inside the host cytoplasm. Further, endosomes transport the virus through the cell and ensure that no trace of the virus is left on the surface, which could otherwise trigger immune recognition by the host. [ 16 ]
A third method is to simply attach to the surface of the host cell via its receptors and inject only the viral genome into the cell, leaving the rest of the virus on the surface. This is restricted to viruses for which the genome alone is sufficient to infect a cell (for example, positive-strand RNA viruses, whose genomes can be immediately translated). The best-studied examples are the bacteriophages ; for example, when the tail fibers of the T2 phage land on a cell, its central sheath pierces the cell membrane and the phage injects DNA from the head capsid directly into the cell. [ 17 ]
Once a virus is in a cell, it will activate formation of proteins (either by itself or using the host’s machinery) to gain full control of the host cell, if possible. Control mechanisms include the suppression of intrinsic cell defenses, suppression of cell signaling and suppression of host cellular transcription and translation . Often, these cytotoxic effects lead to the death and decline of a cell infected by a virus.
A cell is classified as susceptible to a virus if the virus is able to enter the cell. After the introduction of the viral particle, unpacking of the contents (viral proteins in the tegument and the viral genome via some form of nucleic acid ) occurs as preparation of the next stage of viral infection: viral replication .
|
https://en.wikipedia.org/wiki/Viral_entry
|
The viral epitranscriptome includes all modifications to viral transcripts, studied by viral epitranscriptomics . Like the more general epitranscriptome , these modifications do not affect the sequence of the transcript , but rather have consequences on subsequent structures and functions.
The discovery of mRNA modifications dates back to 1957 with the discovery of the pseudouridine modification. [ 1 ] Many of these modifications were found in the noncoding regions of cellular RNA. Once these modifications were discovered in mRNA, discoveries in viral transcripts soon followed. [ 2 ] Detection has been aided by the advancement and use of new techniques such as m 6 A-seq.
Viral RNA modifications use the same machinery as cellular RNA. This involves the use of "writer" and "reader" complexes. The writer complex contains the enzyme methyltransferase-like 3 (METTL3) and its cofactors, such as METTL14 , WTAP, KIAA1429 and RBM15 /RBM15B, which add the m 6 A modification in the nucleus . [ 2 ] The YTH family of proteins, such as YTHDC1 and YTHDC2 , is capable of detecting these modifications within the nucleus. [ 3 ] In the cytoplasm , the reading duties are carried out by YTHDF1 , YTHDF2 , and YTHDF3 . [ 2 ] The proteins ALKBH5 and FTO remove the m 6 A modification, functionally serving as erasers, with the latter having a more restricted selectivity depending on the position of the modification. [ 2 ]
This modification involves the addition of a methyl (-CH3) group to the 6th nitrogen of the adenine base in an mRNA molecule. It was among the first mRNA modifications to be discovered, in 1974. [ 4 ] The modification is common in viral mRNA transcripts, occurring in nearly 25% of them. [ 5 ] Its distribution is not uniform, with some transcripts containing more than 10 modifications. [ 2 ] m 6 A modification is a dynamic process with many roles, ranging from viral interactions with cellular machinery and structural adjustments to control of the viral life cycle. Studies have shown different regulatory patterns for different viruses depending on the context. For single-stranded RNA viruses , the effects of the modifications appear to differ by viral family. In the HIV-1 genome , the single-stranded positive-sense RNA contains m 6 A modifications at multiple sites in both the untranslated and coding regions. [ 6 ] The presence of these modifications in the viral transcript is enough to increase corresponding modifications in host cell mRNA through binding interactions between the HIV-1 gp 120 envelope protein and the CD4 receptor on T lymphocytes . [ 5 ] [ 7 ] For HIV-1 and other RNA viral families such as chikungunya , enteroviruses and influenza , studies show both positive and negative roles for m 6 A modifications in viral replication and infection . [ 5 ] For other families, the effects are clearer. For the Flaviviridae family, the modification has a negative role and hinders viral replication . [ 8 ] In respiratory syncytial virus , the modification shows a positive role and enhances viral replication and infection. [ 5 ] Why responses differ within the same family of viruses, and why viral families such as Flaviviridae conserve m 6 A modifications that negatively impact their cycles, are currently unknown and under investigation. [ 5 ]
Most RNA viruses carry out their cycles in the cytoplasm, away from the machinery required for writing and erasing m 6 A modifications, which is housed in the nucleus. [ 5 ] For DNA viruses, which cycle in the nucleus with direct access to that machinery, no clear general positive or negative regulatory role can be attributed to m 6 A modifications. In simian virus and hepatitis B viruses , different m 6 A reading complexes were shown to have different roles in regulation, with some having a conserved positive role and others having a neutral or negative effect on replication.
This modification involves the addition of a methyl group to the 2' hydroxyl (-OH) group of the ribose sugar of RNA molecules. [ 9 ] In contrast with the m 6 A modification, it is the ribose sugar, a part of the backbone rather than the base, that is altered. It is present in various kinds of cellular RNA, providing coding and structural support. 2'-O-methylation of viral RNA is often accompanied by the addition of an inverted N7-methylguanosine cap to the phosphate group at the 5' end. [ 10 ] These modifications regulate important functions of viral RNA such as metabolism and immune system interactions.
Different viruses have their own mechanisms for acquiring this modification. Cytoplasmic RNA viruses such as flaviviruses and coronaviruses encode the enzymes required to catalyze cap-formation reactions, with some needing a single enzyme for both the 5' cap and 2'-O-methylation while others, such as poxviruses, require two enzymes. [ 11 ] Others, such as influenza virus , can hijack the methylguanosine caps from host cell mRNA and be preferentially translated. [ 10 ]
One viral epitranscriptome modification that has been identified is 5-methylcytidine (m 5 C). HIV-1 and MLV transcriptomes contain elevated levels of these residues, approximately 14-30 fold higher than a cell's normal levels. NSUN2 is the cytidine methyltransferase credited with m 5 C formation in cells and its amplification in viral epitranscriptomes. NSUN2 affects translation of the viral mRNA, boosting expression of the viral genome. [ 12 ] m 5 C has also been found to alter splicing patterns and splice-site locations in the viral transcriptome, affecting the HIV-1 transcript in both early and late infection. [ 13 ]
Viral RNA modifications play important roles in interactions with the immune system of host cells. The m 6 A modification of viral RNAs allows viruses to escape recognition by the retinoic acid-inducible gene I (RIG-I) receptor in the type 1 IFN response, a crucial pathway of innate immunity. [ 5 ] 5' N7-methylguanosine capping and 2'-O-methylation also play vital roles in viral infections. The cap structures help viral RNA to blend in among modified cellular mRNA and avoid triggering immune response systems.
|
https://en.wikipedia.org/wiki/Viral_epitranscriptome
|
Viral eukaryogenesis is the hypothesis that the cell nucleus of eukaryotic life forms evolved from a large DNA virus in a form of endosymbiosis within a methanogenic archaeon or a bacterium . The virus later evolved into the eukaryotic nucleus by acquiring genes from the host genome and eventually usurping its role. The hypothesis was first proposed by Philip Bell in 2001 [ 1 ] and was further popularized with the discovery of large, complex DNA viruses (such as Mimivirus ) that are capable of protein biosynthesis .
Viral eukaryogenesis has been controversial for several reasons. For one, it is sometimes argued that the posited evidence for the viral origins of the nucleus can be conversely used to suggest the nuclear origins of some viruses. [ 2 ] Secondly, this hypothesis has further inflamed the longstanding debate over whether viruses are living organisms . [ 2 ]
The viral eukaryogenesis hypothesis posits that eukaryotes are composed of three ancestral elements: a viral component that became the modern nucleus; a prokaryotic cell (an archaeon according to the eocyte hypothesis ) which donated the cytoplasm and cell membrane of modern cells; and another prokaryotic cell (here bacterium ) that, by endocytosis , became the modern mitochondrion or chloroplast .
In 2006, researchers suggested that the transition from RNA to DNA genomes first occurred in the viral world. [ 3 ] A DNA-based virus may have provided storage for an ancient host that had previously used RNA to store its genetic information (such a host is called a ribocell or ribocyte). [ 2 ] Viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host cells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria . Following this hypothesis, archaea, bacteria , and eukaryotes each obtained their DNA informational system from a different virus. [ 3 ] In the original paper, an RNA cell was also proposed at the origin of eukaryotes, though one that eventually became more complex and featured RNA processing . Although this contrasts with the now more widely favored eocyte hypothesis, viruses seem to have contributed to the origin of all three domains of life (the 'out of virus hypothesis'). It has also been suggested that telomerase and telomeres , key aspects of eukaryotic cell replication , have viral origins. Further, the viral origins of the modern eukaryotic nucleus may have relied on multiple infections of archaeal cells carrying bacterial mitochondrial precursors with lysogenic viruses . [ 4 ]
The viral eukaryogenesis hypothesis depicts a model of eukaryotic evolution in which a virus, similar to a modern pox virus , evolved into a nucleus via gene acquisition from existing bacterial and archaeal species. [ 5 ] The lysogenic virus then became the information storage center for the cell, while the cell retained its capacities for gene translation and general function despite the viral genome's entry. Similarly, the bacterial species involved in this eukaryogenesis retained its capacity to produce energy in the form of ATP while also passing much of its genetic information into this new virus-nucleus organelle . It is hypothesized that the modern cell cycle , whereby mitosis , meiosis , and sex occur in all eukaryotes, evolved because of the balances struck by viruses, which characteristically follow a pattern of tradeoff between infecting as many hosts as possible and killing an individual host through viral proliferation. Hypothetically, viral replication cycles may mirror those of plasmids and viral lysogens . However, this theory is controversial, and additional experimentation involving archaeal viruses is necessary, as they are probably the most evolutionarily similar to modern eukaryotic nuclei. [ 6 ] [ 7 ]
The viral eukaryogenesis hypothesis points to the cell cycle of eukaryotes, particularly sex and meiosis, as evidence. [ 6 ] Little is known about the origins of DNA or reproduction in prokaryotic or eukaryotic cells. It is thus possible that viruses were involved in the creation of Earth's first cells. [ 8 ] The eukaryotic nucleus contains linear DNA with specialized end sequences, like that of viruses (and in contrast to bacterial genomes, which have a circular topology); it uses mRNA capping , and separates transcription from translation . Eukaryotic nuclei are also capable of cytoplasmic replication. Some large viruses have their own DNA-directed RNA polymerase . [ 2 ] Transfers of "infectious" nuclei have been documented in many parasitic red algae . [ 9 ]
Recent supporting evidence includes the discovery that upon the infection of a bacterial cell , the giant bacteriophage 201 Φ2-1 (of the genus Phikzvirus ) assembles a nucleus-like structure around the region of genome replication and uncouples transcription and translation, and synthesized mRNA is then transported into the cytoplasm where it undergoes translation. [ 10 ] The same researchers also found that this same phage encodes a eukaryotic homologue to tubulin ( PhuZ ) that plays the role of positioning the viral factory in the center of the cell during genome replication. [ 11 ] The PhuZ spindle shares several unique properties with eukaryotic spindles: dynamic instability, bipolar filament arrays, and centrally positioning DNA. [ 7 ] Further, many classes of nucleocytoplasmic large DNA viruses (NCLDVs) such as mimiviruses have the apparatus to produce m7G capped mRNA and contain homologues of the eukaryotic cap-binding protein eIF4E. Those supporting viral eukaryogenesis also point to the lack of these features in archaea, and so believe that a sizable gap separates the archaeal groups most related to the eukaryotes and the eukaryotes themselves in terms of the nucleus. In light of these and other discoveries, Bell modified his original thesis to suggest that the viral ancestor of the nucleus was an NCLDV-like archaeal virus rather than a pox-like virus. [ 7 ] Another piece of supporting evidence is that the m7G capping apparatus (involved in uncoupling of transcription from translation) is present in both Eukarya and Mimiviridae but not in Lokiarchaeota that are considered the nearest archaeal relatives of Eukarya according to the Eocyte hypothesis (also supported by the phylogenetic analysis of the m7G capping pathway). [ 7 ]
Several precepts in the theory are possible. For instance, a helical virus with a bilipid envelope bears a distinct resemblance to a highly simplified cellular nucleus (i.e., a DNA chromosome encapsulated within a lipid membrane). In theory, a large DNA virus could take control of a bacterial or archaeal cell. Instead of replicating and destroying the host cell , it would remain within the cell, thus overcoming the tradeoff dilemma typically faced by viruses. With the virus in control of the host cell's molecular machinery, it would effectively become a functional nucleus. Through the processes of mitosis and cytokinesis , the virus would thus recruit the entire cell as a symbiont—a new way to survive and proliferate. [ 12 ]
|
https://en.wikipedia.org/wiki/Viral_eukaryogenesis
|
Viral evolution is a subfield of evolutionary biology and virology concerned with the evolution of viruses . [ 1 ] [ 2 ] Viruses have short generation times, and many—in particular RNA viruses —have relatively high mutation rates (on the order of one point mutation or more per genome per round of replication). Although most viral mutations confer no benefit and often even prove deleterious to viruses, the rapid rate of viral mutation combined with natural selection allows viruses to quickly adapt to changes in their host environment. In addition, because viruses typically produce many copies in an infected host, mutated genes can be passed on to many offspring quickly. Although the likelihood of mutation and evolution varies with the type of virus (e.g., double stranded DNA , double stranded RNA , or single stranded DNA ), viruses overall have high chances of mutation.
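The order-of-magnitude claim above can be illustrated with a simple back-of-the-envelope calculation (the figures below are typical illustrative values, not measurements from the cited sources):

```latex
% Expected mutations per genome per round of replication:
% E[mutations] = mu * L, where mu is the per-site mutation rate and L the genome length.
% For a typical RNA virus with mu ~ 10^{-4} substitutions per site per replication
% and a genome of L ~ 10^{4} nucleotides:
\[
E[\text{mutations}] \;\approx\; 10^{-4} \times 10^{4} \;=\; 1 \ \text{mutation per genome per replication.}
\]
% DNA viruses, whose polymerases often have proofreading activity, typically have
% per-site mutation rates several orders of magnitude lower.
```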
Viral evolution is an important aspect of the epidemiology of viral diseases such as influenza ( influenza virus ), AIDS ( HIV ), and hepatitis (e.g. HCV ). The rapidity of viral mutation also causes problems in the development of successful vaccines and antiviral drugs , as resistant mutations often appear within weeks or months after the beginning of a treatment. One of the main theoretical models applied to viral evolution is the quasispecies model , which defines a viral quasispecies as a group of closely related viral strains competing within an environment.
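For illustration, the quasispecies model is commonly written as a system of replication-mutation equations over the relative frequencies of the competing sequences (the notation here is generic and not taken from the cited sources):

```latex
% Quasispecies (replication-mutation) equations, in standard notation:
% x_i = frequency of sequence i, f_j = fitness (replication rate) of sequence j,
% q_{ij} = probability that replication of sequence j yields sequence i,
% phi = mean fitness, which keeps the frequencies normalized to 1.
\[
\frac{dx_i}{dt} \;=\; \sum_{j} f_j\, q_{ij}\, x_j \;-\; \phi\, x_i,
\qquad
\phi \;=\; \sum_{j} f_j\, x_j .
\]
% The stationary distribution of this system is the quasispecies: a cloud of related
% mutants centered on the fittest ("master") sequence rather than a single strain.
```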
Studies at the molecular level have revealed relationships between viruses infecting organisms from each of the three domains of life , suggesting that some viral proteins pre-date the divergence of life and thus that viruses infected the last universal common ancestor . [ 3 ] This indicates that some viruses emerged early in the evolution of life, [ 4 ] and that they have probably arisen multiple times. [ 5 ] It has been suggested that new groups of viruses have repeatedly emerged at all stages of evolution, often through the displacement of ancestral structural and genome replication genes. [ 6 ]
There are three main classical hypotheses [ 7 ] that aim to explain the origins of viruses: the virus-first hypothesis (viruses predate or arose alongside the first cells), the escape hypothesis (viruses evolved from genetic elements that escaped from cellular genomes), and the reduction hypothesis (viruses are remnants of formerly cellular parasites that lost most of their genes).
One of the problems in studying viral origins and evolution is the high rate of viral mutation, particularly in RNA retroviruses such as HIV. A recent study based on comparisons of viral protein folding structures, however, is offering some new evidence. Fold Super Families (FSFs) are proteins that show similar folding structures independent of the actual sequence of amino acids, and have been found to show evidence of viral phylogeny . The proteome of a virus, the viral proteome , still contains traces of ancient evolutionary history that can be studied today. The study of protein FSFs suggests the existence of ancient cellular lineages common to both cells and viruses before the appearance of the 'last universal cellular ancestor' that gave rise to modern cells. Evolutionary pressure to reduce genome and particle size may have eventually reduced viro-cells into modern viruses, whereas other coexisting cellular lineages eventually evolved into modern cells. [ 15 ] Furthermore, the long genetic distance between RNA and DNA FSFs suggests new experimental support for the RNA world hypothesis , with a long intermediary period in the evolution of cellular life.
Definitive exclusion of a hypothesis on the origin of viruses is difficult to make on Earth given the ubiquitous interactions between viruses and cells, and the lack of availability of rocks that are old enough to reveal traces of the earliest viruses on the planet. From an astrobiological perspective, it has therefore been proposed that on celestial bodies such as Mars not only cells but also traces of former virions or viroids should be actively searched for: possible findings of traces of virions in the apparent absence of cells could provide support for the virus-first hypothesis. [ 16 ]
Viruses do not form fossils in the traditional sense as they are much smaller than the finest colloidal fragments forming sedimentary rocks that fossilize plants and animals. However, the genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally 'invaded' the host germline . For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses . These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology . [ 17 ]
The evolutionary history of viruses can to some extent be inferred from analysis of contemporary viral genomes. The mutation rates for many viruses have been measured, and application of a molecular clock allows dates of divergence to be inferred. [ 18 ]
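As a simplified illustration of how a molecular clock converts a measured substitution rate into a divergence date (the numbers below are hypothetical and chosen only to show the arithmetic):

```latex
% Strict molecular clock estimate of divergence time:
% d = genetic distance between two lineages (substitutions per site),
% r = substitution rate (substitutions per site per year).
% The factor of 2 appears because both lineages accumulate substitutions
% independently since their common ancestor.
\[
t \;\approx\; \frac{d}{2r},
\qquad\text{e.g.}\quad
d = 0.02,\ r = 10^{-3}\ \text{subs/site/year}
\;\Rightarrow\;
t \approx \frac{0.02}{2 \times 10^{-3}} = 10\ \text{years.}
\]
```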
Viruses evolve through changes in their RNA (or DNA), some quite rapidly, and the best adapted mutants quickly outnumber their less fit counterparts. In this sense their evolution is Darwinian . [ 19 ] The way viruses reproduce in their host cells makes them particularly susceptible to the genetic changes that help to drive their evolution. [ 20 ] The RNA viruses are especially prone to mutations. [ 21 ] In host cells there are mechanisms for correcting mistakes when DNA replicates and these kick in whenever cells divide. [ 21 ] These important mechanisms prevent potentially lethal mutations from being passed on to offspring. But these mechanisms do not work for RNA and when an RNA virus replicates in its host cell, changes in their genes are occasionally introduced in error, some of which are lethal. One virus particle can produce millions of progeny viruses in just one cycle of replication, therefore the production of a few "dud" viruses is not a problem. Most mutations are "silent" and do not result in any obvious changes to the progeny viruses, but others confer advantages that increase the fitness of the viruses in the environment. These could be changes to the virus particles that disguise them so they are not identified by the cells of the immune system or changes that make antiviral drugs less effective. Both of these changes occur frequently with HIV . [ 22 ]
Many viruses (for example, influenza A virus) can "shuffle" their genes with other viruses when two similar strains infect the same cell. This phenomenon, a form of reassortment, is called antigenic shift and is often the cause of new and more virulent strains appearing. Other viruses change more slowly as mutations in their genes gradually accumulate over time, a process known as antigenic drift . [ 24 ]
Through these mechanisms new viruses are constantly emerging and present a continuing challenge in attempts to control the diseases they cause. [ 25 ] [ 26 ] Most species of viruses are now known to have common ancestors, and although the "virus first" hypothesis has yet to gain full acceptance, there is little doubt that the thousands of species of modern viruses have evolved from less numerous ancient ones. [ 27 ] The morbilliviruses , for example, are a group of closely related, but distinct viruses that infect a broad range of animals. The group includes measles virus, which infects humans and primates; canine distemper virus , which infects many animals including dogs, cats, bears, weasels and hyaenas; rinderpest , which infected cattle and buffalo; and other viruses of seals, porpoises and dolphins. [ 28 ] Although it is not possible to prove which of these rapidly evolving viruses is the earliest, for such a closely related group of viruses to be found in such diverse hosts suggests the possibility that their common ancestor is ancient. [ 29 ]
Escherichia virus T4 (phage T4) is a species of bacteriophage that infects Escherichia coli bacteria. It is a double-stranded DNA virus in the family Myoviridae . Phage T4 is an obligate intracellular parasite that reproduces within the host bacterial cell and its progeny are released when the host is destroyed by lysis . The complete genome sequence of phage T4 encodes about 300 gene products . [ 30 ] These virulent viruses are among the largest, most complex viruses that are known and one of the best studied model organisms . They have played a key role in the development of virology and molecular biology . The numbers of reported genetic homologies between phage T4 and bacteria and between phage T4 and eukaryotes are similar suggesting that phage T4 shares ancestry with both bacteria and eukaryotes and has about equal similarity to each. [ 31 ] Phage T4 may have diverged in evolution from a common ancestor of bacteria and eukaryotes or from an early evolved member of either lineage. Most of the phage genes showing homology with bacteria and eukaryotes encode enzymes acting in the ubiquitous processes of DNA replication , DNA repair , recombination and nucleotide synthesis. [ 31 ] These processes likely evolved very early. The adaptive features of the enzymes catalyzing these early processes may have been maintained in the phage T4, bacterial, and eukaryotic lineages because they were established well-tested solutions to basic functional problems by the time these lineages diverged.
Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection have given viruses the advantage of continuing to spread. One way that viruses have been able to spread is through the evolution of virus transmission ; the virus can find a new host through several routes of transmission. [ 32 ]
Virulence , or the harm that a virus does to its host, depends on various factors. In particular, the method of transmission tends to affect how the level of virulence changes over time. Viruses that transmit through vertical transmission (transmission to the offspring of the host) tend to evolve lower levels of virulence. Viruses that transmit through horizontal transmission (transmission between members of the same species that do not have a parent-child relationship) usually evolve higher virulence. [ 37 ]
|
https://en.wikipedia.org/wiki/Viral_evolution
|
Viruses are only able to replicate themselves by commandeering the reproductive apparatus of cells and making them reproduce the virus's genetic structure and particles instead. How viruses do this depends mainly on the type of nucleic acid they contain, DNA or RNA, which is either one or the other but never both. Viruses cannot function or reproduce outside a cell and are totally dependent on a host cell to survive. Most viruses are species specific, and related viruses typically only infect a narrow range of plants, animals, bacteria, or fungi. [ 1 ]
For the virus to reproduce and thereby establish infection, it must enter cells of the host organism and use those cells' materials. To enter the cells, proteins on the surface of the virus interact with proteins of the cell. Attachment, or adsorption, occurs between the viral particle and the host cell membrane. A hole forms in the cell membrane, then the virus particle or its genetic contents are released into the host cell, where replication of the viral genome may commence.
Next, a virus must take control of the host cell's replication mechanisms . It is at this stage that a distinction is made between the susceptibility and permissiveness of a host cell: permissiveness determines the outcome of the infection. After control is established and the environment is set for the virus to begin making copies of itself, replication occurs quickly, producing millions of copies.
After a virus has made many copies of itself, the progeny may begin to leave the cell by several methods. This is called shedding and is the final stage in the viral life cycle.
Some viruses can "hide" within a cell, which may mean that they evade the host cell defenses or immune system and may increase the long-term "success" of the virus. This hiding is deemed latency. During this time, the virus does not produce any progeny, it remains inactive until external stimuli —such as light or stress—prompts it to activate.
|
https://en.wikipedia.org/wiki/Viral_life_cycle
|
Viral metagenomics uses metagenomic technologies to detect viral genomic material from diverse environmental and clinical samples. [ 1 ] [ 2 ] Viruses are the most abundant biological entity and are extremely diverse; however, only a small fraction of viruses have been sequenced and only an even smaller fraction have been isolated and cultured. [ 1 ] [ 3 ] Sequencing viruses can be challenging because viruses lack a universally conserved marker gene so gene-based approaches are limited. [ 3 ] [ 4 ] Metagenomics can be used to study and analyze unculturable viruses and has been an important tool in understanding viral diversity and abundance and in the discovery of novel viruses. [ 1 ] [ 5 ] [ 6 ] For example, metagenomics methods have been used to describe viruses associated with cancerous tumors and in terrestrial ecosystems. [ 7 ]
The traditional methods for discovering, characterizing, and assigning viral taxonomy to viruses were based on isolating the virus particle or its nucleic acid from samples. [ 8 ] The virus morphology could be visualized using electron microscopy, but only if the virus could be isolated at a high enough titer to be detected. The virus could be cultured in eukaryotic cell lines or bacteria, but only if the appropriate host cell type was known, and the nucleic acid of the virus could be detected using PCR, but only if a consensus primer was known. [ 8 ]
Metagenomics requires no prior knowledge of the viral genome as it does not require a universal marker gene, a primer or probe design. [ 9 ] Because this method uses prediction tools to detect viral content of a sample, it can be used to identify new virus species or divergent members of known species.
The earliest metagenomic studies of viruses were carried out on ocean samples in 2002. The sequences that were matched to referenced sequences were predominantly double-stranded DNA bacteriophages and double-stranded algal viruses. [ 10 ]
In 2016 the International Committee on Taxonomy of Viruses (ICTV) officially recognized that viral genomes assembled from metagenomic data can be classified using the same procedures for viruses isolated via classical virology approaches. [ 11 ]
In the 2002 metagenomics study the researchers found that 65% of the sequences of DNA and RNA viruses had no matches in the reference databases. [ 10 ] This phenomenon of unmatched viral sequences in sequence reference databases is prevalent in viral metagenomics studies and is referred to as “viral dark matter". [ 3 ] [ 8 ] It is predominantly caused by the lack of complete viral genome sequences of diverse samples in reference databases and the rapid rate of viral evolution. [ 3 ] [ 8 ]
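A purely illustrative sketch of why such unmatched sequences arise is given below; it is not a description of any specific published tool, and the function names, k-mer length, and threshold are hypothetical. Reads (or assembled contigs) are compared against the k-mers of known reference genomes, and sequences sharing too few k-mers with any reference remain unclassified:

```python
# Minimal, purely illustrative sketch of reference-based read classification by
# k-mer containment. Real pipelines are far more elaborate (quality filtering,
# assembly, sensitive protein-level searches); names and threshold are hypothetical.

def kmers(seq: str, k: int = 21) -> set[str]:
    """Return the set of k-mers contained in a nucleotide sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_read(read: str, reference_kmers: set[str], k: int = 21,
                  threshold: float = 0.5) -> str:
    """Label a read 'known' if enough of its k-mers occur in the reference set."""
    read_kmers = kmers(read, k)
    if not read_kmers:
        return "too_short"
    shared = len(read_kmers & reference_kmers) / len(read_kmers)
    return "known" if shared >= threshold else "dark_matter"

if __name__ == "__main__":
    # Toy "database" built from a single short reference sequence.
    reference = kmers("ATGGCGTACGTTAGCCGATACGGCTTAGGCATCGATCGGATCCGTAGCTAGGCTA")
    reads = [
        "GCGTACGTTAGCCGATACGGCTTAGGCATCG",  # overlaps the reference -> "known"
        "TTTTACCCGGGAAATTTCCCGGGAAACCCTT",  # no overlap -> unclassified "dark matter"
    ]
    for r in reads:
        print(r[:15] + "...", classify_read(r, reference))
```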
Adding to these challenges, there are seven classes of viruses under the Baltimore classification system, which groups viruses based on their genomic structure and manner of transcription: genomes may be double-stranded DNA, single-stranded DNA, double-stranded RNA, or single-stranded RNA. [ 12 ] Single-stranded RNA can be positive or negative sense. These different nucleic acid types need different sequencing approaches, and there is no universal gene marker conserved across all virus types. [ 3 ] [ 4 ] Gene-based approaches can only target specific groups of viruses (such as RNA viruses that share a conserved RNA polymerase sequence). [ 3 ] [ 4 ]
There is still a bias towards DNA viruses in reference databases. Common reasons for this bias are that RNA viruses mutate more rapidly than DNA viruses, that DNA is easier to handle from samples while RNA is unstable, and that RNA metagenomic analysis requires additional steps (reverse transcription). [ 4 ] [ 8 ]
Sequences can be contaminated with the host organism's sequences, which is particularly troublesome if the host organism of the virus is unknown. [ 4 ] There can also be contamination from nucleic acid extraction and PCR. [ 4 ]
Metagenomic analysis uses whole-genome shotgun sequencing to characterize microbial diversity in clinical and environmental samples. Total DNA and/or RNA is extracted from the samples and prepared as a DNA or RNA library for sequencing. [ 9 ] These methods have been used to sequence the whole genomes of Epstein–Barr virus (EBV) and HCV ; however, contaminating host nucleic acids can affect sensitivity to the target viral genome, with the proportion of reads related to the target sequence often being low. [ 13 ] [ 14 ]
The IMG/VR system and IMG/VR v.2.0 are the largest interactive public virus databases, with over 760,000 metagenomic viral sequences and isolate viruses, and serve as a starting point for the sequence analysis of viral fragments derived from metagenomic samples. [ 15 ] [ 16 ]
While untargeted metagenomics and metatranscriptomics do not need a genetic marker, amplicon sequencing does. It uses a highly conserved gene as a genetic marker, but because of the varied nucleic acid types, the marker has to be specific to a particular group of viruses. [ 3 ] [ 4 ] This is done via PCR amplification with primers that are complementary to a known, highly conserved nucleotide sequence. [ 9 ] PCR is then followed by whole-genome sequencing methods and has been used to track the Ebola virus , [ 17 ] Zika virus , [ 18 ] and COVID-19 [ 19 ] epidemics. PCR amplicon sequencing is more successful for whole-genome sequencing of samples with low concentrations. However, with larger viral genomes and the heterogeneity of RNA viruses, multiple overlapping primers may be required to cover the amplification of all genotypes. PCR amplicon sequencing requires knowledge of the viral genome prior to sequencing and appropriate primers, and it is highly dependent on viral titers; however, it is a cheaper evaluation method than metagenomic sequencing when studying known viruses with relatively small genomes. [ 9 ]
Target enrichment is a culture-independent method that sequences viral genomes directly from a sample using small RNA or DNA probes complementary to the pathogen's reference sequence. The probes can be bound to a solid phase to capture and pull down complementary DNA sequences in the sample. [ 9 ] The presence of overlapping probes increases the tolerance for primer mismatches, but probe design is costly and time-consuming, which limits rapid response. DNA capture is followed by brief PCR cycling and shotgun sequencing. Success of this method is dependent on available reference sequences to create the probes, and it is not suitable for characterization of novel viruses. [ 9 ] This method has been used to characterize large and small viruses such as HCV , [ 14 ] HSV-1 , [ 20 ] and HCMV . [ 21 ]
Viral metagenomics methods can produce erroneous chimerical sequences. [ 22 ] [ 23 ] These can include in vitro artifacts from amplification and in silico artifacts from assembly. [ 23 ] Chimeras can form between unrelated viruses, as well as between viral and eukaryotic sequences. [ 23 ] The likelihood of errors is partially mitigated by greater sequencing depth, but chimeras can still form in areas of high coverage if the reads are highly fragmented. [ 22 ]
Plant viruses pose a global threat to crop production, but through metagenomic sequencing and viral database creation, modified plant viruses can be used to aid plant immunity as well as alter physical appearance. [ 24 ] Data on plant virus genomes obtained from metagenomic sequencing can be used to create cloned viruses with which to inoculate plants, allowing the study of viral components and the biological characterization of viral agents with increased reproducibility. Engineered mutant virus strains have been used to alter the coloration and size of various ornamental plants and to promote the health of crops. [ 25 ]
Viral metagenomics contributes to viral classification without the need for culture-based methodologies and has provided vast insights into viral diversity in any system. Metagenomics can be used to study viruses' effects on a given ecosystem and how they affect the microbiome, as well as to monitor viruses in an ecosystem for possible spillover into human populations. [ 1 ] Within ecosystems, viruses can be studied to determine how they compete with each other as well as their effects on host functions. Viral metagenomics has been used to study unculturable viral communities in marine and soil ecosystems. [ 7 ] [ 26 ]
Viral metagenomics is readily used to discover novel viruses, with a major focus on those zoonotic or pathogenic to humans. Viral databases obtained from metagenomics provide rapid methods to identify viral infections and to detect drug-resistant variants in clinical samples. [ 9 ] The contributions of viral metagenomics to viral classification have aided pandemic surveillance efforts and made infectious disease surveillance and testing more affordable. [ 27 ] Since the majority of human pandemics are zoonotic in origin, metagenomic surveillance can provide faster identification of novel viruses and their reservoirs. [ 28 ]
One such surveillance program is the Global Virome Project (GVP), an international collaborative research initiative based at the One Health Institute at the University of California, Davis . [ 29 ] [ 30 ] The GVP aims to boost infectious disease surveillance around the globe by using low-cost sequencing methods in high-risk countries to prevent future virus outbreaks. [ 27 ] [ 31 ]
Viral metagenomics has been used to test for virus-related cancers and difficult-to-diagnose cases in clinical diagnostics. [ 31 ] This method is most often used when conventional and advanced molecular testing cannot find a causative agent for disease. Metagenomic sequencing can also be used to detect pathogenic viruses in clinical samples and provide real-time data on a pathogen's presence in a population. [ 28 ]
The methods used for clinical viral metagenomics are not standardized, but guidelines have been published by the European Society for Clinical Virology . [ 32 ] [ 33 ] A mixture of different sequencing platforms is used for clinical viral metagenomics, the most common being instruments from Illumina and Oxford Nanopore Technologies . Several different protocols, both for wet-lab work and for bioinformatic analysis, are also in use. [ 34 ]
|
https://en.wikipedia.org/wiki/Viral_metagenomics
|
Viral neuronal tracing is the use of a virus to trace neural pathways , providing a self-replicating tracer . Viruses have the advantage of self-replication over molecular tracers but can also spread too quickly and cause degradation of neural tissue. Viruses that can infect the nervous system, called neurotropic viruses , spread through spatially close assemblies of neurons through synapses , allowing for their use in studying functionally connected neural networks. [ 1 ] [ 2 ] [ 3 ]
The use of viruses to label functionally connected neurons stems from the work and bioassay developed by Albert Sabin . [ 4 ] Subsequent research allowed for the incorporation of immunohistochemical techniques to systematically label neuronal connections. [ 4 ] To date, viruses have been used to study multiple circuits in the nervous system.
The individual connections of neurons have long evaded neuroanatomists . [ 5 ] Neuronal tracing methods offer an unprecedented view into the morphology and connectivity of neural networks. Depending on the tracer used, this can be limited to a single neuron or can progress trans-synaptically to adjacent neurons. After the tracer has spread sufficiently, the extent may be measured either by fluorescence (for dyes) or by immunohistochemistry (for biological tracers).
An important innovation in this field is the use of neurotropic viruses as tracers. These not only spread throughout the initial site of infection but can jump across synapses . [ citation needed ]
The life cycle of viruses, such as those used in neuronal tracing, is different from cellular organisms . Viruses are parasitic in nature and cannot proliferate on their own. Therefore, they must infect another organism and effectively hijack cellular machinery to complete their life cycle.
The first stage of the viral life cycle is called viral entry . This defines the manner in which a virus infects a new host cell. In nature, neurotropic viruses are usually transmitted through bites or scratches, as in the case of the rabies virus or certain strains of herpes viruses . In tracing studies, this step occurs artificially, typically through the use of a syringe. The next stage of the viral life cycle is called viral replication . During this stage, the virus takes over the host cell's machinery to cause the cell to create more viral proteins and assemble more viruses.
Once the cell has produced a sufficient number of viruses, the virus enters the viral shedding stage. During this stage, viruses leave the original host cell in search of a new host. In the case of neurotropic viruses, this transmission typically occurs at the synapse . Viruses can jump across a relatively short space from one neuron to the next. This trait is what makes viruses so useful in tracer studies. [ citation needed ]
Once the virus enters the next cell, the cycle begins anew. The original host cell begins to degrade after the shedding stage. In tracer studies, this is the reason the timing must be tightly controlled. If the virus is allowed to spread too far, the original microcircuitry of interest is degraded, and no useful information can be retrieved. Typically, viruses can infect only a small number of organisms, and even then, only a specific cell type within the body. The specificity of a particular virus for a specific tissue is known as its tropism . Viruses in tracer studies are all neurotropic (capable of infecting neurons). [ 6 ]
The viral tracer may be introduced in peripheral organs, such as a muscle or gland . [ 7 ] Certain viruses, such as adeno-associated virus , can be injected into the bloodstream and can cross the blood–brain barrier to infect the brain. [ 8 ] It may also be introduced into a ganglion or injected directly into the brain using a stereotactic device . These methods offer unique insight into how the brain and its periphery are connected.
Viruses are introduced into neuronal tissue in many different ways. There are two major methods to introduce tracers into the target tissues.
Once the tracer is introduced into the cell, the aforementioned transport mechanisms take over. Then, the virus starts to infect cells in the local area once it enters the nervous system. The viruses function by incorporating their own genetic material into the genome of the infected cells. [ 10 ] The host cell will then produce the proteins encoded by the gene. Researchers are able to incorporate numerous genes into the infected neurons, including fluorescent proteins used for visualization. [ 10 ] Further advances in neuronal tracing allow for the targeted expression of fluorescent proteins to specific cell types. [ 10 ]
Once the virus has spread to the desired extent, the brain is sliced and mounted on slides. Then, fluorescent antibodies that are either specific for the virus, or fluorescent complementary DNA probes for viral DNA, are washed over the slices and imaged under a fluorescence microscope . [ 9 ]
Virus transmission relies on the mechanism of axoplasmic transport . Within the axon are long slender protein complexes called microtubules . They act as a cytoskeleton to help the cell maintain its shape. These can also act as highways within the axon and facilitate the transport of neurotransmitter -filled vesicles and enzymes back and forth between the cell body, or soma and the axon terminal, or synapse .
Viruses can be transported in one of two directions: either anterograde (from soma to synapse), or retrograde (from synapse to soma). Neurons naturally transport proteins , neurotransmitters , and other macromolecules via these cellular pathways. Neuronal tracers, including viruses, take advantage of these transport mechanisms to distribute a tracer throughout a cell. Researchers can use this to study synaptic circuitry.
Anterograde tracing is the use of a tracer that moves from soma to synapse. Anterograde transport uses a protein called kinesin to move viruses along the axon in the anterograde direction. [ 9 ]
Retrograde tracing is the use of a tracer that moves from synapse to soma. Retrograde transport uses a protein called dynein to move viruses along the axon in the retrograde direction. [ 9 ] [ 11 ] It is important to note that different tracers show characteristic affinities for dynein and kinesin, and so will spread at different rates.
At times, it is desirable to trace neurons upstream and downstream to determine both the inputs and the outputs of neural circuitry. This uses a combination of the above mechanisms. [ 12 ]
One of the benefits of using viral tracers is the ability of the virus to jump across synapses. This allows for the tracing of microcircuitry as well as projection studies. Few molecular tracers are able to do this, and those that can usually have a decreased signal in secondary neurons, which leads to the other benefit of viral tracing: viruses can self-replicate. As soon as the secondary neuron is infected, the virus begins multiplying and replicating. There is no loss of signal as the tracer propagates through the brain. [ 6 ]
As viruses propagate through the nervous system, the viral tracers infect neurons and ultimately destroy them. As such, the timing of tracer studies must be precise to allow adequate propagation before neural death occurs, causing large-scale harm to the body. [ 13 ]
It has been difficult to find viruses perfectly suited for the task. A virus used for tracing should ideally be just mildly infectious, to give good results, but not so deadly as to destroy neural tissue too quickly or pose unnecessary risks to those exposed.
Another drawback is that viral neuronal tracing currently requires the additional step of attaching fluorescent antibodies to the viruses to visualize the path. In contrast, most molecular tracers are brightly colored and can be viewed with the naked eye, without additional modification.
Viral tracing is primarily used to trace neuronal circuits. Researchers use one of the previously mentioned viruses to study how neurons in the brain are connected to each other with a very fine level of detail. [ 14 ] Connectivity largely determines how the brain functions. Viruses have been used to study retinal ganglion circuits, [ 15 ] cortical circuits, [ 16 ] and spinal circuits, among others.
The following is a list of viruses currently in use for the purpose of neuronal tracing.
|
https://en.wikipedia.org/wiki/Viral_neuronal_tracing
|
Viral replication is the formation of biological viruses during the infection process in the target host cells. Viruses must first get into the cell before viral replication can occur. Through the generation of abundant copies of its genome and the packaging of these copies, the virus continues infecting new hosts. Replication between viruses is greatly varied and depends on the type of genes involved. Most DNA viruses assemble in the nucleus, while most RNA viruses develop solely in the cytoplasm. [ 1 ]
Viruses multiply only in living cells. The host cell must provide the energy and synthetic machinery and the low-molecular-weight precursors for the synthesis of viral proteins and nucleic acids. [ 2 ]
Virus replication occurs in seven stages: attachment, entry (penetration), uncoating, replication, assembly, maturation, and release.
Attachment is the first step of viral replication. Some viruses attach to the cell membrane of the host cell and inject their DNA or RNA into the host to initiate infection. Attachment to a host cell is often achieved by a virus attachment protein that extends from the protein shell ( capsid ) of a virus. This protein is responsible for binding to a surface receptor on the plasma membrane (or membrane carbohydrates) of a host cell. Viruses can exploit normal cell receptor functions to allow attachment to occur by mimicking molecules that bind to host cell receptors. For example, rhinoviruses use their virus attachment protein to bind to the receptor ICAM-1 on host cells, which is normally used to facilitate adhesion between host cells. [ 3 ]
Entry, or penetration, is the second step in viral replication. This step is characterized by the virus passing through the plasma membrane of the host cell. The most common way a virus gains entry to the host cell is by receptor-mediated endocytosis , which comes at no energy cost to the virus, only the host cell. Receptor-mediated endocytosis occurs when a molecule (in this case a virus) binds to a receptor on the membrane of the cell. A series of chemical signals from this binding causes the cell to wrap the plasma membrane around the attached virus, forming a virus-containing vesicle inside the cell. [ 3 ]
Viruses enter host cells using a variety of mechanisms, including the endocytic and non-endocytic routes. [ 4 ] They can also fuse at the plasma membrane and can spread within the host via fusion or cell-cell fusion. [ 5 ] Viruses attach to proteins on the host cell surface known as cellular receptors or attachment factors to aid entry. [ 6 ] Evidence shows that viruses utilize ion channels on the host cells during viral entry.
Fusion: External viral proteins promote the fusion of the virion with the plasma membrane. [ 7 ] This forms a pore in the host membrane, and after entry, the virion becomes uncoated, and its genomic material is then transferred into the cytoplasm. [ 8 ]
Cell-to-cell fusion: Some viruses prompt specific protein expression on the surfaces of infected cells to attract uninfected cells. [ 9 ] This interaction causes the uninfected cell to fuse with the infected cell at lower pH levels to form a multinuclear cell known as a syncytium. [ 10 ]
Endocytic routes: the process by which an intracellular vesicle is formed by membrane invagination, resulting in the engulfment of extracellular and membrane-bound components, in this context a virus. [ 11 ]
Non-endocytic routes: the process by which viral particles are released into the cell by fusion of the extracellular viral envelope and the membrane of the host cell. [ 4 ]
Uncoating is the third step in viral replication. Uncoating is defined as the removal of the virion's protein "coat" and the release of its genetic material. This step occurs in the same area where viral transcription occurs. Different viruses have various mechanisms for uncoating. Some RNA viruses, such as rhinoviruses, use the low pH in a host cell's endosomes to activate their uncoating mechanism: the rhinovirus releases a protein that creates holes in the endosome and allows the virus to release its genome through the holes. Many DNA viruses travel to the host cell's nucleus and release their genetic material through nuclear pores. [ 3 ]
The fourth step in the viral cycle is replication, which is defined by the rapid production of the viral genome. How a virus undergoes replication relies on the type of genetic material the virus possesses. Based on their genetic material, viruses will hijack the corresponding cellular machinery for said genetic material. Viruses that contain double-stranded DNA (dsDNA) share the same kind of genetic material as all organisms, and can therefore use the replication enzymes in the host cell nucleus to replicate the viral genome. Many RNA viruses typically replicate in the cytosol , and can directly access the host cell's ribosomes to manufacture viral proteins once the RNA is in a replicative form.
Viruses may undergo two types of life cycles: the lytic cycle and the lysogenic cycle. In the lytic cycle, the virus introduces its genome into a host cell and initiates replication by hijacking the host's cellular machinery to make new copies of the virus. [ 12 ] In the lysogenic life cycle, the viral genome is incorporated into the host genome. The host cell then undergoes its normal life cycle, replicating and dividing and copying the viral genome along with its own. [ 13 ] The viral genome can be triggered to begin viral production by chemical and environmental stimuli. [ 14 ] Once a lysogenic virus enters the lytic life cycle, it continues in the viral production pathways and proceeds with transcription and mRNA production (examples include cold sores caused by herpes simplex virus 1 (HSV-1) and lysogenic bacteriophages).
Assembly is when the newly manufactured viral proteins and genomes are gathered and put together to form immature viruses. Like the other steps, how a particular virus is assembled is dependent on what type of virus it is. Assembly can occur in the plasma membrane, cytosol, nucleus, golgi apparatus, and other locations within the host cell. Some viruses only insert their genome into a capsid once the capsid is completed, while in other viruses the capsid will wrap around the genome as it is being copied. [ 2 ]
Maturation is the final step before a competent virus is formed. This typically involves capsid modifications provided by enzymes (host- or virus-encoded). [ 3 ]
The final step in viral replication is release, which is when the newly assembled and mature viruses leave the host cell. How a virus releases from the host cell is dependent on the type of virus it is. One common type of release is budding. This occurs when viruses that form their envelope from the host's plasma membrane bend the membrane around the capsid. As the virus bends the plasma membrane it begins to wrap around the whole capsid until the virus is no longer attached to the host cell. Another common way viruses leave the host cell is through cell lysis , where the viruses lyse the cell causing it to burst which releases mature viruses that were in the host cell. [ 3 ]
Viruses are split into seven classes according to the type of genetic material and the method of mRNA production; each class contains its own families of viruses, which in turn have differing replication strategies. [ 15 ] David Baltimore , a Nobel Prize -winning biologist, devised the Baltimore Classification System to classify viruses based on their unique replication strategies. There are seven replication strategies under this system (Baltimore Classes I through VII). The seven classes of viruses are described below briefly and in general terms. [ 16 ]
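For quick reference, the class-to-genome mapping described in the following paragraphs can be summarized as a small lookup table. The Python sketch below is illustrative only; the example families are common representatives (several are named later in this article) chosen for illustration, not a complete assignment.

# Minimal sketch of the Baltimore classification as a lookup table.
# Example families are representative picks for illustration only.
BALTIMORE_CLASSES = {
    "I":   ("double-stranded DNA", "Adenoviridae"),
    "II":  ("single-stranded DNA", "Parvoviridae"),
    "III": ("double-stranded RNA", "Reoviridae"),
    "IV":  ("positive-sense single-stranded RNA", "Picornaviridae"),
    "V":   ("negative-sense single-stranded RNA", "Rhabdoviridae"),
    "VI":  ("single-stranded RNA, replicating via reverse transcription", "Retroviridae"),
    "VII": ("double-stranded DNA, replicating via reverse transcription", "Hepadnaviridae"),
}

def describe(baltimore_class: str) -> str:
    """Return a one-line summary of a Baltimore class."""
    genome, family = BALTIMORE_CLASSES[baltimore_class]
    return f"Class {baltimore_class}: {genome} (e.g. {family})"

for cls in BALTIMORE_CLASSES:
    print(describe(cls))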
This type of virus usually must enter the host nucleus before it is able to replicate. Some of these viruses require host cell polymerases to replicate their genome , while others, such as adenoviruses or herpes viruses, encode their own replication factors. However, in either case, replication of the viral genome is highly dependent on a cellular state permissive to DNA replication and, thus, on the cell cycle . The virus may induce the cell to forcefully undergo cell division , which may lead to transformation of the cell and, ultimately, cancer . An example of a family within this classification is the Adenoviridae .
There is only one well-studied example in which a class 1 family of viruses does not replicate within the nucleus. This is the Poxvirus family, which comprises highly pathogenic viruses that infect vertebrates .
Viruses that fall under this category are not as well studied, but are still highly relevant to vertebrates. Two examples include the Circoviridae and Parvoviridae . They replicate within the nucleus, and form a double-stranded DNA intermediate during replication. A human Anellovirus called TTV is included within this classification and is found in almost all humans, infecting them asymptomatically in nearly every major organ .
RNA viruses:
The polymerases of RNA viruses lack the proofreading functions found in the polymerases of DNA viruses. This contributes to RNA viruses having lower replicative fidelity than DNA viruses, making them highly prone to mutation, which can increase their overall survival rate. [ 17 ] RNA viruses lack the capacity to identify and repair mismatched or damaged nucleotides, and thus RNA genomes are prone to mutations introduced by mechanisms intrinsic and extrinsic to viral replication. [ 18 ] RNA viruses present a therapeutic double-edged sword: their mutability lets them withstand the challenge of antiviral drugs, cause epidemics, and infect multiple host species, making them difficult to treat. However, the reverse transcriptase protein that accompanies some RNA viruses can be used as an indirect drug target, preventing transcription and the synthesis of new viral particles. [ 19 ] (This is the basis for anti-HIV/AIDS drugs. [ 20 ])
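To give a sense of scale for this low fidelity, the back-of-the-envelope calculation below uses commonly cited order-of-magnitude figures (an error rate of roughly 10^-4 per copied nucleotide and a genome of roughly 10,000 nucleotides). These numbers are assumptions for illustration and are not taken from this article.

# Rough estimate of how many mutations a low-fidelity RNA polymerase introduces
# per genome copy. Both inputs are assumed order-of-magnitude values.
error_rate_per_nt = 1e-4   # assumed error rate per nucleotide copied (no proofreading)
genome_length_nt = 10_000  # assumed RNA genome length in nucleotides

expected_mutations = error_rate_per_nt * genome_length_nt
print(f"Expected mutations per genome copy: {expected_mutations:.1f}")
# Roughly one mutation per progeny genome, so a single burst of thousands of
# virions already samples thousands of sequence variants.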
Like most viruses with RNA genomes, double-stranded RNA viruses do not rely on host polymerases for replication to the extent that viruses with DNA genomes do. Double-stranded RNA viruses are not as well studied as other classes. This class includes two major families, the Reoviridae and Birnaviridae . Their genomes are segmented, and replication is monocistronic: each gene codes for only one protein, unlike in other viruses, which exhibit more complex translation.
These viruses fall into two types; however, both share the fact that replication occurs primarily in the cytoplasm and is not as dependent on the cell cycle as that of DNA viruses. This class of viruses is also one of the most studied, alongside the double-stranded DNA viruses.
The positive-sense RNA viruses and indeed all genes defined as positive-sense can be directly accessed by host ribosomes to immediately form proteins. These can be divided into two groups, both of which replicate in the cytoplasm:
Examples of this class include the families Coronaviridae , Flaviviridae , and Picornaviridae .
The negative-sense RNA viruses and indeed all genes defined as negative-sense cannot be directly accessed by host ribosomes to immediately form proteins. Instead, they must be transcribed by viral polymerases into the "readable" complementary positive-sense. These can also be divided into two groups:
Examples in this class include the families Orthomyxoviridae , Paramyxoviridae , Bunyaviridae , Filoviridae , and Rhabdoviridae (which includes rabies ).
A well-studied family of this class of viruses include the retroviruses . One defining feature is the use of reverse transcriptase to convert the positive-sense RNA into DNA. Instead of using the RNA for templates of proteins, they use DNA to create the templates, which is spliced into the host genome using integrase . Replication can then commence with the help of the host cell's polymerases.
This small group of viruses, exemplified by the Hepatitis B virus, have a double-stranded, gapped genome that is subsequently filled in to form a covalently closed circle ( cccDNA ) that serves as a template for production of viral mRNAs and a subgenomic RNA. The pregenome RNA serves as template for the viral reverse transcriptase and for production of the DNA genome.
|
https://en.wikipedia.org/wiki/Viral_replication
|
Viral shedding is the expulsion and release of virus progeny following successful reproduction during a host cell infection. Once replication has been completed and the host cell is exhausted of all resources in making viral progeny, the viruses may begin to leave the cell by several methods . [ 1 ]
The term is variously used to refer to viral particles shedding from a single cell, from one part of the body into another, [ 2 ] and from a body into the environment, where the virus may infect another host. [ 3 ]
Vaccine shedding is a form of viral shedding which can occur in instances of infection caused by some attenuated (or "live virus") vaccines .
"Budding" through the cell envelope into extracellular space is most effective for viruses that require their own envelope. In effect, the viral envelope is built from a part of the host cell membrane . Examples for viruses that shed through budding include HIV , HSV , SARS , and smallpox . When beginning the budding process, the viral nucleocapsid interacts with a certain region of the host cell membrane. During this interaction, the glycosylated viral envelope protein inserts itself into the cell membrane. In order to successfully bud from the host cell, the nucleocapsid of the virus must form a connection with the cytoplasmic tails of envelope proteins. [ 4 ] Though budding does not immediately destroy the host cell, this process will slowly use up the cell membrane and eventually lead to the cell's demise. This is also how antiviral responses are able to detect virus-infected cells. [ 5 ] Budding has been most extensively studied for viruses of eukaryotes . However, it has been demonstrated that viruses infecting prokaryotes of the domain Archaea also employ this mechanism of virion release. [ 6 ]
Animal cells are programmed to self-destruct when they are under viral attack or damaged in some other way. By forcing the cell to undergo apoptosis or cell suicide, release of progeny into the extracellular space is possible. However, apoptosis does not necessarily result in the cell simply popping open and spilling its contents into the extracellular space. Rather, apoptosis is usually controlled and results in the cell's genome being chopped up, before apoptotic bodies of dead cell material clump off the cell to be absorbed by macrophages . This is a good way for a virus to get into macrophages either to infect them or simply travel to other tissues in the body.
Although this process is primarily used by non-enveloped viruses, enveloped viruses may also use this. HIV is an example of an enveloped virus that exploits this process for the infection of macrophages. [ 7 ]
Viruses that have envelopes that come from nuclear or endosomal membranes can leave the cell via exocytosis , in which the host cell is not destroyed. [ 8 ] Viral progeny are synthesized within the cell, and the host cell's transport system is used to enclose them in vesicles ; the vesicles of virus progeny are carried to the cell membrane and then released into the extracellular space. This is used primarily by non-enveloped viruses, although enveloped viruses display this too. An example is the use of recycling viral particle receptors in the enveloped varicella-zoster virus . [ 9 ]
A human with a viral disease can be contagious if they are shedding virus particles, even if they are unaware of doing so. Some viruses such as HSV-2 (which produces genital herpes ) can cause asymptomatic shedding and therefore spread undetected from person to person, as no fever or other hints reveal the contagious nature of the host. [ 10 ]
|
https://en.wikipedia.org/wiki/Viral_shedding
|
The mammalian immune system has evolved complex methods for addressing and adapting to foreign antigens. At the same time, viruses have co-evolved evasion machinery to address the many ways that host organisms attempt to eradicate them. DNA and RNA viruses use complex methods to evade immune cell detection through disruption of the Interferon Signaling Pathway, remodeling of cellular architecture, targeted gene silencing, and recognition protein cleavage. [ 1 ]
The human immune system relies on a plethora of cell-cell signaling pathways to transmit information about a cell's health and microenvironment. Many of these pathways are mediated by soluble ligands called cytokines, which fit like a lock and key into adjacent cell-surface receptors. This language of cell communication imparts both specificity and spatiotemporal control for the transmission of data. [ 2 ]
The Interferon System is composed of a family of cytokines . Type-I Interferons (IFN-α/β) and Type-III Interferons (IFN-λ) play key roles in adaptive immunity, acting as communication highways between cells infected with foreign double-stranded DNA or double-stranded RNA. Mammalian cells utilize specialized receptors known as Pattern Recognition Receptors (PRRs) to detect viral infection; these receptors are able to recognize pathogen-associated molecular patterns (PAMPs) inscribed in viral DNA and RNA. These pattern recognition receptors, often localized to either the cytosol or the nucleus, are responsible for notifying infected cells and initiating the secretion of interferon cytokines. [ 3 ]
The precise role of double-stranded (ds)RNA as a central player in the Interferon System is still widely investigated. Groups have found that positive-strand RNA viruses and dsRNA viruses produce significant amounts of dsRNA, but the precise methods mammalian cells leverage to distinguish between self and non-self dsRNA have yet to be uncovered. Studies suggest that recognition must extend beyond simple identification of dsRNA structure and likely relies on other epigenetic markers. [ 4 ]
dsRNA has been implicated in the activation of the interferon system through the activation of Protein Kinase R (PKR). Cytoplasmic PKR is often associated with the ribosome in mammalian cells, where it is able to recognize double-stranded and single-stranded RNA and subsequently phosphorylate various substrates, arresting protein synthesis. [ 5 ] The activation of PKR subsequently triggers interferon signaling, initiating cell death in response to viral dsRNA recognition. The roles of PKR activation have been studied in depth, with groups finding that PKR is insensitive to the presence of short dsRNA and siRNA but shows significant affinity for dsRNA and ssRNA with secondary structure. [ 4 ]
Groups have found that interferon signaling promotes the activation of a 2'-5'-oligoadenylate synthetase that is sensitive to the presence of dsRNA longer than 15 base pairs. Because this mechanism does not distinguish self from non-self dsRNA, its activation results in an overall reduction in protein synthesis rather than a specific reduction of viral protein synthesis. [ 6 ]
In recent years, studies have focused on how viruses evade Pattern Recognition Receptors, target adaptor proteins and their kinases, inhibit transcription factors for Interferon induction, and evade Interferon Stimulated Genes. [ 3 ]
Viruses of the family Flaviviridae , such as hepatitis C virus, have developed complex viral mechanisms to rearrange the cell membrane, creating a membranous web designed to house viral replication machinery. These viruses utilize endogenous host cell nuclear pore complex proteins to shield viral RNA from Pattern Recognition Receptors by excluding PRRs from the interior of the viral membrane compartment. By this architectural rearrangement of the membrane, viruses have developed a method to evade cytoplasm-localized pattern recognition proteins such as RIG-I. In order to evade pattern recognition, other viruses such as enteroviruses have evolved multi-functional proteins that not only help in viral protein processing but also cleave the cytoplasmic recognition proteins MDA5 and RIG-I, further demonstrating the extent to which viruses can reduce Interferon Signaling through various pathways. Other viruses have been reported to target upstream activators of pattern recognition proteins, antagonizing upstream proteins that remove inhibitory post-translational modifications. [ 3 ]
Other viruses utilize host cell proteins to shield viral DNA until it has reached the nucleus. Upon entry into the host cell cytoplasm, the HIV-1 capsid is recognized and bound by cyclophilin A (CypA); this affinity interaction stabilizes the capsid and prevents exposure of the HIV-1 cDNA to pattern recognition receptors in the cytoplasm. This shielding allows the HIV-1 cDNA to translocate to the nucleus where it may begin replication. [ 7 ]
|
https://en.wikipedia.org/wiki/Viral_strategies_for_immune_response_evasion
|
Viral transformation is the change in growth, phenotype , or indefinite reproduction of cells caused by the introduction of inheritable material. Through this process, a virus causes harmful transformations of an in vivo cell or cell culture . The term can also be understood as DNA transfection using a viral vector .
Viral transformation can occur both naturally and medically. Natural transformations can include viral cancers , such as human papillomavirus (HPV) and T-cell Leukemia virus type I . Hepatitis B and C are also the result of natural viral transformation of the host cells. Viral transformation can also be induced for use in medical treatments.
Cells that have been virally transformed can be differentiated from untransformed cells through a variety of growth, surface, and intracellular observations. The growth of transformed cells can be affected by a loss of growth limitation caused by cell contact, less oriented growth, and high saturation density . Transformed cells can lose their tight junctions , increase their rate of nutrient transfer, and increase their protease secretion. Transformation can also affect the cytoskeleton and change the quantity of signal molecules .
There are three types of viral infections that can be considered under the topic of viral transformation. These are cytocidal, persistent, and transforming infections. Cytocidal infections can cause fusion of adjacent cells, disruption of transport pathways including ions and other cell signals, disruption of DNA , RNA and protein synthesis, and nearly always leads to cell death. Persistent infections involve viral material that lays dormant within a cell until activated by some stimulus. This type of infection usually causes few obvious changes within the cell but can lead to long chronic diseases. Transforming infections are also referred to as malignant transformation . This infection causes a host cell to become malignant and can be either cytocidal (usually in the case of RNA viruses) or persistent (usually in the case of DNA viruses). Cells with transforming infections undergo immortalization and inherit the genetic material to produce tumors. Since the term cytocidal, or cytolytic, refers to cell death, these three infections are not mutually exclusive. Many transforming infections by DNA tumor viruses are also cytocidal. [ 1 ]
Table 1: Cellular effects of viral infections [ 1 ]
Cytocidal infection: rounding of the cell; fusion with adjacent cells; appearance of inclusion bodies; inhibition of DNA, RNA, and protein synthesis; interference with sub-cellular interactions; insufficient movement of ions; formation of secondary messengers; activation of cellular cascades.
Persistent infection: fusion with adjacent cells; appearance of inclusion bodies; budding; immune responses limit viral spread; antigen-antibody complexes can incorporate viral antigens, causing inflammation; virus production is rare until stimulated.
Transforming infection: unlimited cell replication; inactivation of tumor suppressor proteins; impaired cell cycle regulation.
Cytocidal infections are often associated with changes in cell morphology and physiology and are thus important for complete viral replication and transformation. Cytopathic effects often include changes in the cell's morphology, such as fusion with adjacent cells to form polykaryocytes , as well as the synthesis of nuclear and cytoplasmic inclusion bodies . Physiological changes include the insufficient movement of ions, the formation of secondary messengers, and the activation of cellular cascades to continue cellular activity. Biochemically , many viruses directly inhibit the synthesis of host DNA, RNA, and proteins, or interfere with protein-protein, DNA-protein, and RNA-protein interactions at the subcellular level. Genotoxicity involves breaking, fragmenting, or rearranging the chromosomes of the host. Lastly, biologic effects include the viruses' ability to affect the activity of antigens and immunoglobulins in the host cell. [ 1 ]
There are two types of cytocidal infections, productive and abortive. In productive infections, additional infectious viruses are produced. Abortive infections do not produce infectious viruses. One example of a productive cytocidal infection is the herpes virus . [ 2 ]
There are three types of persistent infections, latent, chronic, and slow, in which the virus stays inside the host cell for prolonged periods of time. During latent infections there is minimal to no expression of the viral genome; the genome remains within the host cell until the virus is ready for replication. Chronic infections have cellular effects similar to acute cytocidal infections, but there is a limited number of progeny and viruses involved in transformation. Lastly, slow infections have a longer incubation period in which no physiological, morphological, or subcellular changes may be involved. [ 1 ]
Transforming infections are limited to abortive or restrictive infections. [ 1 ] This constitutes the broadest category of infections, as it can include both cytocidal and persistent infections. Viral transformation is most commonly understood to mean transforming infections, so the remainder of the article focuses on detailing transforming infections.
In order for a cell to be transformed by a virus , the viral DNA must enter the host cell . The simplest case is viral transformation of a bacterial cell, a process called lysogeny . As shown in Figure 2, a bacteriophage lands on a cell and pins itself to the cell. The phage can then penetrate the cell membrane and inject the viral DNA into the host cell. The viral DNA can then either lie dormant until stimulated by a source such as UV light or be immediately taken up by the host's genome . In either case the viral DNA will replicate along with the original host DNA during cell replication, leaving two cells infected with the virus. The process continues to propagate more and more infected cells. [ 3 ] This process is in contrast to the lytic cycle , in which a virus only uses the host cell's replication machinery to replicate itself before destroying the host cell. [ 4 ]
The process is similar in animal cells. In most cases, rather than viral DNA being injected into an animal cell, a section of the membrane encases the virus and the cell then absorbs both the virus and the encasing section of the membrane into the cell. This process, called endocytosis , is shown in Figure 3. [ 5 ]
Viral transformation disrupts the normal expression of the host cell's genes in favor of expressing a limited number of viral genes. The virus also can disrupt communication between cells and cause cells to divide at an increased rate. [ 6 ]
Viral transformation can impose characteristic, identifiable features upon a cell. Typical phenotypic changes include high saturation density, anchorage-independent growth, loss of contact inhibition, loss of oriented growth, immortalization , and disruption of the cell's cytoskeleton .
Viral genes are expressed through the use of the host cell's replication machinery; therefore, many viral genes have promoters that support binding of many transcription factors found naturally in the host cells. These transcription factors along with the virus' own proteins can repress or activate genes from both the virus and the host cell's genome. Many viruses can also increase the production of the cell's regulatory proteins. [ 1 ]
Depending on the virus, a variety of genetic changes can occur in the host cell. In the case of a lytic cycle virus, the cell only survives long enough for the replication machinery to be used to create additional viral units. In other cases, the viral DNA persists within the host cell and replicates as the cell replicates. This viral DNA can either be incorporated into the host cell's genetic material or persist as a separate genetic vector. Either case can lead to damage of the host cell's chromosomes . It is possible that the damage can be repaired; however, the most common result is instability in the original genetic material or the suppression or alteration of gene expression. [ 1 ]
An assay is an analytic tool often used in a laboratory setting in order to assess or measure some quality of a target entity. [ 7 ] In virology , assays can be used to differentiate between transformed and non-transformed cells. Varying the assay used, changes the selective pressure on the cells and therefore can change what properties are selected in the transformed cells. [ 6 ]
Three common assays used are the focus forming assay , the anchorage-independent growth assay, and the reduced serum assay .
The focus forming assay (FFA) is used to grow cells containing a transforming oncogene on a monolayer of non-transformed cells. The transformed cells will form raised, dense spots on the sample as they grow without contact inhibition. [ 8 ] This assay is highly sensitive compared to other assays used for viral analysis, such as the yield reduction assay . [ 9 ]
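When the foci are counted against a known dilution and inoculum volume, the same readout can be turned into a quantitative titer. The helper below is a generic sketch of that calculation; the function name and the example numbers are illustrative assumptions, not values from this article.

def ffu_per_ml(focus_count: int, dilution_factor: float, inoculum_volume_ml: float) -> float:
    """Estimate focus-forming units (FFU) per mL of the original sample.

    focus_count:        number of foci counted on the monolayer
    dilution_factor:    total dilution applied to the sample (e.g. 1e-4 for 1:10,000)
    inoculum_volume_ml: volume of diluted sample added to the plate, in mL
    """
    return focus_count / (dilution_factor * inoculum_volume_ml)

# Illustrative example: 42 foci from 0.1 mL of a 1:10,000 dilution.
print(f"{ffu_per_ml(42, 1e-4, 0.1):.2e} FFU/mL")  # about 4.2e+06 FFU/mL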
An example of the anchorage-independent growth assay is the soft agar assay, which assesses the cells' ability to grow in a gel or viscous fluid. Transformed cells can grow in this environment and are considered anchorage independent, whereas cells that can only grow when attached to a solid surface are anchorage-dependent, untransformed cells. This assay is considered one of the most stringent for detection of malignant transformation. [ 10 ]
In a reduced serum assay, cells are assayed by exploiting the changes in cell serum requirements. Non-transformed cells require at least a 5% serum medium in order to grow; however, transformed cells can grow in an environment with significantly less serum. [ 6 ]
Natural transformation is the viral transformation of cells without the interference of medical science. This is the most commonly considered form of viral transformation and includes many cancers and diseases, such as HIV , Hepatitis B , and T-cell Leukemia virus type I .
As many as 20% of human tumors are caused by viruses. [ 11 ] Some such viruses that are commonly recognized include HPV , T-cell Leukemia virus type I , and hepatitis B .
Viral oncogenesis is most common with DNA and RNA tumor viruses, most frequently the retroviruses. [ 12 ] There are two types of oncogenic retroviruses : acute transforming viruses and non-acute transforming viruses. Acute transforming viruses induce rapid tumor growth since they carry viral oncogenes in their DNA/RNA to induce such growth. An example of an acute transforming virus is the Rous Sarcoma Virus (RSV), which carries the v-src oncogene. v-src is derived from c-src, a cellular proto-oncogene that stimulates rapid cell growth and expansion. A non-acute transforming virus, on the other hand, induces slow tumor growth, since it does not carry any viral oncogenes. It induces tumor growth by transcriptionally activating cellular proto-oncogenes, particularly through its long terminal repeat (LTR). [ 12 ]
Viral Oncogenesis through transformation can occur via 2 mechanisms: [ 1 ]
One or both of these mechanisms can occur in the same host cell.
The Hepatitis B viral protein X is believed to cause hepatocellular carcinoma through transformation, typically of liver cells. The viral DNA is incorporated into the host cell's genome causing rapid cell replication and tumor growth. [ 13 ]
Papillomaviruses typically target epithelial cells and cause everything from warts to cervical cancer. When human papillomavirus (HPV) transforms a cell, it interferes with the function of cellular proteins while degrading other cellular proteins. [ 14 ]
The herpesviruses , Kaposi's sarcoma -associated herpesvirus and Epstein-Barr virus , are believed to cause cancer in humans, such as Kaposi's sarcoma, Burkitt's lymphoma , and nasopharyngeal carcinoma . Although genes have been identified in these viruses that cause transformation, the manner in which the virus transforms and replicates the host cell is not understood. [ 14 ]
The retroviruses include T-cell Leukemia virus type I , HIV , and Rous Sarcoma Virus (RSV). The viral gene tax is expressed when the T-cell Leukemia virus transforms a cell, altering the expression of cellular growth control genes and causing the transformed cells to become cancerous. HIV works differently: it does not directly cause cells to become cancerous but instead makes those infected more susceptible to lymphoma and Kaposi's sarcoma . Many other retroviruses contain the three genes gag , pol , and env , which do not directly cause transformation or tumor formation. [ 14 ]
Human immunodeficiency virus is a viral infection that targets the lymph nodes . HIV binds to the immune CD4 cell and reverse transcriptase alters the host cell genome to allow integration of the viral DNA via integrase . The virus replicates using the host cell's machinery and then leaves the cell to infect additional cells via budding . [ 15 ]
There are many applications in which viral transformation can be artificially induced in a cell culture in order to treat an illness or other condition. A cell culture is infected with a virus causing the transformation; transformed cells can then be used to either produce treatments or be directly introduced into the body.
Type I interferons (IFNs) are used to treat a wide variety of medical conditions including hepatitis C , cancers, viral and inflammatory diseases. IFNs can either be extracted from a natural source, such as cultured human cells or blood leukocytes , or they can be manufactured with recombinant DNA technologies. Most of these IFN treatments have a low response rate. [ 16 ]
The use of viral transformation of the Epstein-Barr virus (EBV) has been recommended to create personalized IFNs. In this process, primary B lymphocytes are transformed with EBV. These cells can then be used to produce IFNs specific for the patient from which the B lymphocytes were extracted. This personalization decreases the likelihood of an antibody response and therefore increases the effectiveness of the treatment. [ 16 ]
When a virus transforms a cell it often causes cancer by either altering the cells' existing genome or introducing additional genetic material which causes cells to uncontrollably replicate. [ 11 ] It is rarely considered that what causes so much harm also has the capability of reversing the process and slowing the cancer growth or even leading to remission. Viruses transform host cells in order to survive and replicate; however, the immune responses of the host cell are typically compromised during transformation making transformed cells more susceptible to other viruses. [ 17 ]
The idea of using viruses to treat cancers was first introduced in 1951, when a 4-year-old boy suddenly went into a temporary remission from leukemia while he had chickenpox . This led to research in the 1990s in which scientists worked to create a strain of the herpes simplex virus strong enough to infect and transform tumor cells but weak enough to leave healthy cells unharmed. Treating patients with viral transformation has the possibility of treating patients more safely and more effectively than traditional methods, such as chemotherapy . Viruses used in the treatment of cancer gain strength and increase their effectiveness as they multiply in the body, while causing only minor side effects, such as nausea , fatigue, and aches. [ 17 ]
|
https://en.wikipedia.org/wiki/Viral_transformation
|
Viral transport medium (VTM) is a solution used to preserve virus specimens after collection so that they can be transported and analysed in a laboratory at a later time. Unless stored in an ultra-low temperature freezer or in liquid nitrogen , virus samples, and especially RNA virus samples, are prone to degradation. However, such cooling equipment is seldom available in the field due to its cumbersome size and weight and, in the case of freezers, high energy consumption. Hence there is a need for VTM, a chemical preservative that can be used at ambient temperature. Chemical components may include saline solution, phosphate-buffered saline (PBS), or fetal bovine serum (FBS). VTM must be sterile to avoid introducing contamination to the specimen. [ 1 ] [ 2 ]
In the United States, the FDA and CDC publish guidelines for VTM production. [ 1 ] [ 2 ]
|
https://en.wikipedia.org/wiki/Viral_transport_medium
|
A viral vector is a modified virus designed to deliver genetic material into cells . This process can be performed inside an organism or in cell culture . Viral vectors have widespread applications in basic research, agriculture, and medicine.
Viruses have evolved specialized molecular mechanisms to transport their genomes into infected hosts, a process termed transduction . This capability has been exploited for use as viral vectors, which may integrate their genetic cargo—the transgene —into the host genome, although non-integrative vectors are also commonly used. In addition to agriculture and laboratory research, viral vectors are widely applied in gene therapy : as of 2022, all approved gene therapies were viral vector-based. Further, compared to traditional vaccines , the intracellular antigen expression enabled by viral vector vaccines offers more robust immune activation.
Many types of viruses have been developed into viral vector platforms, ranging from retroviruses to cytomegaloviruses . Different viral vector classes vary widely in strengths and limitations, suiting some to specific applications. For instance, relatively non-immunogenic and integrative vectors like lentiviral vectors are commonly employed for gene therapy. Chimeric viral vectors—such as hybrid vectors with qualities of both bacteriophages and eukaryotic viruses—have also been developed.
Viral vectors were first created in 1972 by Paul Berg . Further development was temporarily halted by a recombinant DNA research moratorium following the Asilomar Conference and stringent National Institutes of Health regulations. Once lifted, the 1980s saw both the first recombinant viral vector gene therapy and the first viral vector vaccine. Although the 1990s saw significant advances in viral vectors, clinical trials had a number of setbacks, culminating in Jesse Gelsinger 's death. However, in the 21st century, viral vectors experienced a resurgence and have been globally approved for the treatment of various diseases. They have been administered to billions of patients, notably during the COVID-19 pandemic .
Viruses , infectious agents composed of a protein coat that encloses a genome , are the most numerous biological entities on Earth. [ 1 ] [ 2 ] As they cannot replicate independently, they must infect cells and hijack the host 's replication machinery in order to produce copies of themselves . [ 2 ] Viruses do this by inserting their genome—which can be DNA or RNA , either single-stranded or double-stranded —into the host. [ 3 ] Some viruses may integrate their genome directly into that of the host in the form of a provirus . [ 4 ]
This ability to transfer foreign genetic material has been exploited by genetic engineers to create viral vectors, which can transduce the desired transgene into a target cell. [ 2 ] Viral vectors consist of three components: [ 5 ] [ 6 ]
Viral vectors are routinely used in a basic research setting and can introduce genes encoding, for instance, complementary DNA , short hairpin RNA , or CRISPR/Cas9 systems for gene editing. [ 8 ] Viral vectors are employed for cellular reprogramming, like inducing pluripotent stem cells or differentiating adult somatic cells into different cell types. [ 9 ] Researchers also use viral vectors to create transgenic mice and rats for experiments. [ 10 ] Viral vectors can be used for in vivo imaging via the introduction of a reporter gene . Further, transduction of stem cells can permit the tracing of cell lineage during development . [ 9 ]
Gene therapy seeks to modulate or otherwise affect gene expression via the introduction of a therapeutic transgene. Gene therapy by viral vectors can be performed by in vivo delivery by directly administering the vector to the patient, or ex vivo by extracting cells from the patient, transducing them, and then reintroducing the modified cells into the patient. [ 11 ] Viral vector gene therapies may also be used for plants, tentatively enhancing crop performance or promoting sustainable production. [ 12 ]
There are four broad categories of gene therapy: gene replacement, gene silencing , gene addition, or gene editing. [ 11 ] [ 13 ] Relative to other non-integrative gene therapy approaches, transgenes introduced by viral vectors offer multi-year long expression. [ 14 ]
For use as vaccine platforms, viral vectors can be engineered to carry a specific antigen associated with an infectious disease or a tumor antigen . [ 15 ] [ 16 ] Conventional vaccines are not suitable for protection against some pathogens due to unique immune evasion strategies and differences in pathogenesis. [ 17 ] Viral vector-based vaccines, for instance, could eventually offer immunity against HIV-1 and malaria . [ 18 ]
While traditional subunit vaccines elicit a humoral response, [ 19 ] viral vectors allow for intracellular antigen expression that activates MHC pathways via both direct presentation and cross-presentation. This induces a robust adaptive immune response. [ 20 ] [ 21 ] Viral vector vaccines also have intrinsic adjuvant properties via innate immune system activation and the expression of pathogen-associated molecular patterns , negating the need for any additional adjuvant. [ 22 ] [ 15 ] In addition to a more robust immune response in comparison to other vaccine types, viral vectors offer efficient gene transduction and can target specific cell types. [ 19 ] Pre-existing immunity to the virus used as the vector, however, can be a significant issue. [ 18 ]
Prior to 2020, viral vector vaccines were widely administered but confined to veterinary medicine. [ 22 ] In the global response to the COVID-19 pandemic , viral vector vaccines played a fundamental role and were administered to billions of people, particularly in low and middle-income nations. [ 23 ]
Retroviruses , enveloped RNA viruses, are popular viral vector platforms due to their ability to integrate genetic material into the host genome. [ 2 ] Retroviral vectors comprise two general classes: gammaretroviral and lentiviral vectors. The fundamental difference between the two is that gammaretroviral vectors can only infect dividing cells, while lentiviral vectors can infect both dividing and resting cells. [ 24 ] Notably, retroviral genomes are composed of single-stranded RNA and must be converted to proviral double-stranded DNA, a process known as reverse transcription , before being integrated into the host genome via viral proteins like integrase . [ 25 ]
The most commonly used gammaretroviral vector is a modified Moloney murine leukemia virus (MMLV), able to transduce various mammalian cell types. MMLV vectors have been associated with some cases of carcinogenesis. [ 26 ] Gammaretroviral vectors have been successfully applied to ex vivo hematopoietic stem cell therapy to treat multiple genetic diseases. [ 27 ]
Most lentiviral vectors are derived from human immunodeficiency virus type 1 (HIV-1), although modified simian immunodeficiency virus (SIV), the feline immunodeficiency virus (FIV), and the equine infectious anaemia virus (EIAV) have also been utilized. [ 24 ] As all functional genes are removed or otherwise mutated, the vectors are not cytopathic and can be engineered to be non-integrative. [ 28 ]
Lentiviral vectors are able to carry up to 10 kb of foreign genetic material, although 3-4 kb was reported as optimal as of 2023. [ 24 ] [ 28 ] Relative to other viral vectors, lentiviral vectors possess the greatest transduction capacity, due to the formation of a three-stranded "DNA flap" during retro-transcription of the single-strand lentiviral RNA to DNA within the host. [ 28 ]
Although largely non-inflammatory, [ 29 ] lentiviral vectors can induce robust adaptive immune responses by memory-type cytotoxic T cells and T helper cells . [ 30 ] This is largely due to lentiviral vectors' high tropism for dendritic cells , which activate T cells. [ 30 ] However, they can infect all types of antigen-presenting cells. [ 31 ] Moreover, they are the only retroviral vectors able to efficiently transduce both dividing and non-dividing cells, making them the most promising vaccine platforms. [ 31 ] They have also been trialed as vaccines against cancer. [ 32 ]
Lentiviral vectors have been used as in vivo therapies, such as directly treating genetic diseases like haemophilia B , and for ex vivo treatments like immune cell modification in CAR T cell therapy . [ 24 ] In 2017, the US Food and Drug Administration (FDA) approved tisagenlecleucel , a lentiviral vector-based therapy, for acute lymphoblastic leukaemia . [ 33 ]
Adenoviruses are double-stranded DNA viruses belonging to the family Adenoviridae . [ 34 ] [ 35 ] Their relatively large genomes, of approximately 30-45 kb, make them ideal candidates for genetic delivery; [ 34 ] newer adenoviral vectors can carry up to 37 kb of foreign genetic material. [ 36 ] Adenoviral vectors display high transduction efficiency and transgene expression, and can infect both dividing and non-dividing cells. [ 37 ]
The adenoviral capsid, an icosahedron , features a fibre "knob" at each of its 12 vertices. These fibre proteins mediate cell entry, greatly affecting efficacy and contributing to the vector's broad tropism, notably via coxsackie–adenovirus receptors (CARs). [ 34 ] [ 37 ] Adenoviral vectors can induce robust innate and adaptive immune responses. [ 38 ] Their strong immunogenicity is particularly due to the transduction of dendritic cells (DCs), upregulating the expression of both MHC I and II molecules and activating the DCs. [ 39 ] They have a strong adjuvant effect, as they display several pathogen-associated molecular patterns . [ 38 ] One disadvantage is that pre-existing immunity to adenovirus serotypes is common, reducing efficacy. [ 37 ] [ 40 ] The use of chimpanzee adenoviruses may circumvent this issue. [ 41 ]
While the activation of both innate and adaptive immune responses is an obstacle for many therapeutic applications, it makes adenoviral vectors an ideal vaccine platform. [ 35 ] The global response to the COVID-19 pandemic saw the development and use of multiple adenoviral vector vaccines, including Sputnik V , the Oxford–AstraZeneca vaccine , and the Janssen vaccine . [ 42 ]
Adeno-associated viruses (AAVs) are relatively small single-stranded DNA viruses belonging to Parvoviridae and, like lentiviral vectors, AAVs can infect both dividing and non-dividing cells. [ 43 ] AAVs, however, require the presence of a "helper virus" such as an adenovirus or herpes simplex virus to replicate within the host, although it can do so independently if cellular stress is induced or the helper virus genes are carried by the vector. [ 44 ]
AAVs insert themselves into a specific site in the host genome, particularly AAVS1 on chromosome 19 in humans. However, recombinant AAVs have been designed that do not integrate; these are instead maintained as episomes that, in non-dividing cells, can last for years. [ 45 ] One disadvantage is that they are not able to carry large amounts of foreign genetic material. Furthermore, the need to synthesize the complementary strand of the single-stranded genome may delay transgene expression. [ 45 ]
As of 2020, 11 different AAV serotypes, differing by capsid structure and consequently by tropism, had been identified. [ 43 ] The tropism of adeno-associated viral vectors can be tailored by creating recombinant versions from multiple serotypes, termed pseudotyping. [ 43 ] Due to their ability to infect and induce long-lasting effects within non-dividing cells, AAVs are commonly used in basic neuroscience research. [ 46 ] Following the approval of the AAV-based alipogene tiparvovec in Europe in 2012, [ 47 ] the FDA approved the first AAV-based in vivo gene therapy, voretigene neparvovec , in 2017; it treated RPE65-associated Leber congenital amaurosis . [ 33 ] As of 2020, 230 clinical trials using AAV-based treatments were either underway or had been completed. [ 47 ]
Vaccinia virus , a poxvirus , is another promising candidate for viral vector development. [ 48 ] Its use as the smallpox vaccine —first reported by Edward Jenner in 1798—led to the eradication of smallpox and demonstrated vaccinia as safe and effective in humans. [ 49 ] [ 48 ] Moreover, manufacturing procedures developed to mass-produce smallpox vaccine stockpiles may expedite vaccinia viral vector production. [ 50 ]
Vaccinia possesses a large DNA genome and can consequently carry up to 40 kb of foreign DNA. [ 49 ] [ 51 ] [ 52 ] Further, vaccinia are unlikely to integrate into the host genome, decreasing the chance of carcinogenesis. [ 51 ] Attenuated strains—replicating and non-replicating—have been developed. [ 49 ] Although widely characterized due to its use against smallpox, as of 2019 the function of 50 percent of the vaccinia genome was unknown. This may lead to unpredictable effects. [ 52 ]
As a vaccine platform, vaccinia vectors display highly effective transgene expression and create a robust immune response. [ 50 ] The virus is fast-acting: its life cycle produces mature progeny vaccinia within 6 hours, and it has three viral spread mechanisms. [ 52 ] Vaccinia also has an adjuvant effect , activating a strong innate response via toll-like receptors . [ 50 ] A significant disadvantage that can reduce its efficacy, however, is pre-existing immunity against vaccinia in those who received the smallpox vaccine. [ 50 ]
Of the nine herpesviruses that infect humans, herpes simplex virus 1 (HSV-1) is the most well characterized and most commonly used as a viral vector. [ 53 ] HSV-1 offers several advantages: it has broad tropism and can deliver therapeutics via specialized expression systems. [ 54 ] Moreover, HSV-1 can cross the blood brain barrier if medically-disrupted, enabling it to target neurological diseases. Also, HSV-1 does not integrate into the host genome and can carry large amounts of foreign DNA. The former feature prevents harmful mutagenesis, as can occur with retroviral and adeno-associated vectors. Replication-deficient strains have been developed. [ 55 ]
In 2015, talimogene laherparepvec —an HSV-1 vector that triggers an anti-tumor immune response—was approved by the FDA to treat melanoma . [ 56 ] As of 2020, HSV-1 vectors have been experimentally applied against sarcomas and cancers of the brain, colon, prostate, and skin. [ 57 ]
Cytomegalovirus (CMV), a herpesvirus, has also been developed for use as a viral vector. [ 58 ] CMV can infect most cell types and can thus proliferate throughout the body. Although a CMV-based vaccine provided significant immunity against SIV—closely related to HIV—in macaques, development of CMV as a reliable vector was reported to still be in early stages as of 2020. [ 59 ] [ 60 ]
Plant viruses are also engineered as viral vectors for use in agriculture, horticulture, and the production of biologics. [ 61 ] These vectors have been employed for a range of applications, from increasing the aesthetic quality of ornamental plants to pest biocontrol , rapid expression of recombinant proteins and peptides, and accelerating crop breeding. [ 62 ] The use of engineered plant viruses has been proposed to enhance crop performance and promote sustainable production. [ 12 ]
Replicating virus-based vectors are typically used. [ 63 ] RNA viruses used for monocots include wheat streak mosaic virus and barley stripe mosaic virus and, for dicots, tobacco rattle virus . Single-stranded DNA viruses like geminiviruses have also been utilized. [ 63 ] Viral vectors can be administered to plants via several pathways termed "agro-inoculation", including via rubbing, a biolistic delivery system , agrospray, agroinjection, and even via insect vectors . [ 64 ] [ 62 ] However, Agrobacterium -mediated delivery of viral vectors—in which bacteria are transformed with plasmid DNA encoding the viral vector construct—is the most common approach. [ 65 ]
Chimeric vectors combining both bacteriophages and eukaryotic viruses have been developed and are capable of infecting eukaryotic cells. [ 66 ] [ 67 ] Unlike eukaryotic virus-based vectors, such bacteriophage vectors have no innate tropism for eukaryotic cells, allowing them to be engineered to be highly specific for cancer cells. [ 68 ]
Bacteriophage vectors are also commonly used in molecular biology. [ 69 ] For instance, bacteriophage vectors are used in phage-assisted continuous evolution , promoting rapid mutagenesis of bacteria. [ 70 ] Although limited to mycobacteriophages and some phages of gram-negative bacteria , bacteriophages can be used for direct cloning. [ 71 ]
Viral vector manufacturing methods often vary by vector, although most utilize an adherent or suspension-based system with mammalian cells. [ 72 ] For viral vector production at a smaller, laboratory scale, static cell culture systems like Petri dishes are typically used. [ 73 ]
Those techniques used in the laboratory are difficult to scale, requiring different approaches on an industrial scale. [ 72 ] Large single-use disposable culture systems and bioreactors are commonly used by manufacturers. [ 72 ] Vessels such as those with gas permeable surfaces are used to maximize cell culture density and solution transducing units. [ 72 ] Depending on the vessel, viruses can be directly isolated from the supernatant or isolated via chemical lysis of the cultured cells or microfluidization. [ 74 ] In 2017, The New York Times reported a manufacturing backlog of inactivated viruses, delaying some gene therapy trials by years. [ 75 ]
In 1972, Stanford University biochemist Paul Berg developed the first viral vector, incorporating DNA from the lambda phage into the polyomavirus SV40 to infect kidney cells maintained in culture. [ 76 ] [ 77 ] [ 78 ] The implications of this achievement troubled scientists like Robert Pollack , who convinced Berg not to transduce DNA from SV40 into E. coli via a bacteriophage vector. They feared that introducing the purportedly cancer-causing genes of SV40 would create carcinogenic bacterial strains. [ 79 ] [ 80 ] These concerns and others in the emerging field of recombinant DNA led to the Asilomar Conference of 1975, where attendees agreed to a voluntary moratorium on cloning DNA . [ 81 ]
In 1977, the National Institutes of Health (NIH) issued formal guidelines confining viral DNA cloning to rigid BSL-4 conditions, practically preventing such research. However, the NIH loosened these rules in 1979, permitting Bernard Moss to develop a viral vector utilizing vaccinia . [ 81 ] In 1982, Moss reported the first use of a viral vector for transient gene expression. [ 18 ] The following year, Moss used the vaccinia vector to express a hepatitis B antigen, creating the first viral vector vaccine. [ 22 ]
Every realm of medicine has its defining moment, often with a human face attached. Polio had Jonas Salk . In vitro fertilization had Louise Brown , the world's first test-tube baby. Transplant surgery had Barney Clark , the Seattle dentist with the artificial heart. AIDS had Magic Johnson . Now gene therapy has Jesse Gelsinger .
Although a failed gene therapy attempt utilizing wild-type Shope papilloma virus had been made as early as 1972, Martin Cline attempted the first gene therapy utilizing recombinant DNA in 1980. It proved unsuccessful. [ 83 ] [ 11 ] In the 1990s, as genetic diseases were further characterized and viral vector technology improved, there was overoptimism about the capabilities of the technology. Many clinical trials proved failures. [ 84 ] There were some successes, such as the first effective gene therapy for severe combined immunodeficiency (SCID); it employed a retroviral vector. [ 11 ]
However, during a 1999 clinical trial at the University of Pennsylvania , Jesse Gelsinger died from a fatal reaction to an adenoviral vector-based gene therapy. [ 82 ] [ 84 ] It was the first death related to any form of gene therapy. [ 85 ] Consequently, the FDA suspended all gene therapy trials at the University of Pennsylvania and investigated 60 others across the US. [ 85 ] An anonymous editorial in Nature Medicine noted that it represented a "loss of innocence" for viral vectors. [ 84 ] Shortly thereafter, the field's reputation was further damaged when 5 children treated with a SCID gene therapy developed leukemia due to an issue with the retroviral vector. [ 84 ] [ note 1 ]
Viral vectors experienced a resurgence when they were successfully employed for ex vivo hematopoietic gene delivery in clinical settings. [ 86 ] In 2003, China approved the first gene therapy for clinical use: Gendicine , an adenoviral vector encoding p53 . [ 87 ] [ 88 ] In 2012, the European Union issued its first approval of a gene therapy, an adeno-associated viral vector. [ 89 ] During the COVID-19 pandemic , viral vector vaccines were used to an unprecedented extent: administered to billions of people. [ 90 ] [ 22 ] As of 2022, all approved gene therapies were viral vector-based and over 1000 viral vector clinical trials targeting cancer were underway. [ 86 ]
In film, viral vectors are often portrayed as unintentionally causing a pandemic and civilizational catastrophe. [ 91 ] The 2007 film I Am Legend depicts a cancer-targeting viral vector as unleashing a zombie apocalypse . [ 92 ] [ 93 ] Similarly, a viral vector therapy for Alzheimer's disease in Rise of the Planet of the Apes (2011) becomes a deadly pathogen and causes an ape uprising . Other films featuring viral vectors include The Bourne Legacy (2012) and Resident Evil: The Final Chapter (2016). [ 94 ]
|
https://en.wikipedia.org/wiki/Viral_vector
|
The Virbhadra-Ellis lens equation [ 1 ] in astronomy and mathematics relates the angular position of an unlensed source ( β ), the angular position of the image ( θ ), the Einstein bending angle of light ( α̂ ), and the angular diameter lens-source ( D_ds ) and observer-source ( D_s ) distances.
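One commonly quoted form of the relation is reproduced below; it is stated here from the Virbhadra–Ellis formulation as a reader aid and should be treated as an assumption rather than a quotation from this article.

\tan\beta = \tan\theta - \frac{D_{ds}}{D_{s}}\left[\tan\theta + \tan\left(\hat{\alpha} - \theta\right)\right]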
This approximate lens equation is useful for studying the gravitational lens in strong and weak gravitational fields when the angular source position is small.
|
https://en.wikipedia.org/wiki/Virbhadra–Ellis_lens_equation
|
Virginia Wood Cornish is the Helena Rubinstein Professor of Chemistry at Columbia University. [ 1 ]
Cornish received her BA in chemistry in 1991, working with professor Ronald Breslow . Her PhD research, on site-specific protein labeling [ 2 ] and mutagenesis , was carried out with Peter Schultz . Cornish was an NSF postdoctoral fellow [ 3 ] at MIT with Robert T. Sauer . She is the first female graduate from Columbia College to be hired to a full-time faculty position since the College became coeducational in 1983. [ 4 ]
Cornish and her lab group use the tools of systems biology , synthetic biology , and DNA encoding to produce desired chemical products from specific organismic hosts. In 2016, she was part of a notable group of genomic scientists calling for increased ethical study and self-regulation as the costs and effort of creating de novo genomes plummeted. As the "read" phase of the Human Genome Project was completed in 2004, this new effort was dubbed Genome Project-Write . [ 5 ]
|
https://en.wikipedia.org/wiki/Virginia_Cornish
|
Virginia H. Holsinger (March 13, 1937 – September 4, 2009) [ 1 ] was a food scientist whose research was significant in the dairy industry . Her research on enzymes as dietary supplements and food treatments was critical to the development of Lactaid and Beano . [ 2 ]
Holsinger was born in Washington, D.C., on March 13, 1937. In 1958, she graduated from the College of William & Mary with a bachelor's degree in chemistry. Afterwards, she joined the Agricultural Research Service within the U.S. Department of Agriculture (USDA), initially working as an analytical chemist at the Agricultural Research Service Dairy Products Laboratory in Washington, D.C. She later attended the Ohio State University where she completed her doctorate in food science and nutrition in 1980 under the direction of Professor Paul M. T. Hansen. Her dissertation was entitled "A Study of the Rehydration Properties of a Milk Analogue Containing Soy Products and Cheese Whey". [ 3 ] [ 4 ]
Holsinger specialized in dairy products for the duration of her scientific career. She transferred to USDA's Eastern Regional Research Center in Wyndmoor, Pennsylvania , in 1974, at which time she led research programs on the basic science and technology of dairy foods for the duration of her career until her retirement in 1999. [ 3 ]
In the early 1980s, Alan Kligerman , who was one of the owners of a family dairy farm , approached Holsinger about the possibility for developing a milk substitute for people who are lactose intolerant . Lactose intolerance is a common condition worldwide. Holsinger subsequently determined that milk could be treated with a lactase enzyme in order to break down the lactose into simple, easily digestible sugars, in particular, glucose and galactose . For this purpose, Holsinger used a lactase derived from fungi . Most lactose intolerant people could digest milk treated in this way without experiencing the symptoms of lactose intolerance. These findings led to Kligerman's commercialization of the Lactaid brand of lactase-treated dairy products. [ 5 ]
Following the success of Lactaid, the U.S. Military approached Holsinger about developing a product designed for soldiers who were lactose intolerant, with the additional requirement that the product be made from dehydrated milk powder. The basis of the additional requirement was that the milk could be reconstituted by soldiers while they were in the field. Holsinger worked with the team that helped her develop Lactaid and successfully developed a lactose free dehydrated milk powder that had long shelf life while retaining good flavor. [ 5 ]
Holsinger conducted fundamental research on the ability of the enzyme α-galactosidase to convert complex sugars into simple sugars, which are more easily digested by the human digestive tract . Her findings were put to use by Kligerman's company to develop Beano, a digestive aid that prevents the formation of human intestinal gas . [ 5 ]
Holsinger's work and research were also used to develop reduced-fat mozzarella cheese, which was widely adopted in school lunch programs . [ 6 ] Holsinger also formulated a powdered drink mix based on whey and soy that can be reconstituted with water to provide a milk substitute. This formulation was widely used in emergency relief situations as part of the U.S. Agency for International Development's Food for Peace program. [ 3 ] [ 5 ]
In 1995, Holsinger received the Women in Science and Engineering (WISE) Lifetime Achievement Award, granted by the Agricultural Research Service. The award citation stated that this was "for over 20 years of accomplishment in dairy product research and for aiding the advancement of other women in the fields of science and engineering." [ 7 ]
A year after Holsinger's 1999 retirement as the leader of the Dairy Products Research Unit, she was inducted into the Agricultural Research Service's Hall of Fame for lifetime career achievements. [ 5 ]
In 1986, Holsinger received the Distinguished Service Award of the American Chemical Society's Division of Agricultural & Food Chemistry. Holsinger was an emeritus member of the American Chemical Society, having been a member of the organization since 1959. [ 3 ] [ 8 ]
Holsinger and the rest of the team that developed Lactaid received the Food Technology Industrial Achievement Award by the Institute of Food Technologists in 1987 [ 9 ] and in 2025 she was posthumously inducted into the National Inventors Hall of Fame . [ 10 ]
Holsinger published more than 100 scientific papers in scholarly journals . [ 3 ] Representative examples:
Holsinger died of breast cancer on September 4, 2009, at the age of 72, in Fairfax, Virginia . She maintained a home in northern Virginia , near where her brother Gordon Holsinger lived, keeping it even while she was working at the USDA facility in Wyndmoor, Pennsylvania. Her brother was her sole survivor at the time of her death. [ 6 ]
|
https://en.wikipedia.org/wiki/Virginia_Holsinger
|
The Virgocentric flow (VCF) is the preferred movement of Local Group galaxies towards the Virgo cluster [ 1 ] caused by its overwhelming gravity , which separates bound objects from the Hubble flow of cosmic expansion . The VCF can also refer to the Local Group's movement towards the Virgo Supercluster , [ 2 ] since the supercluster's center is considered synonymous with the Virgo cluster, but this motion is more tedious to ascertain due to the supercluster's much larger volume. The excess velocity of Local Group galaxies towards, and with respect to, the Virgo Cluster is 100 to 400 km/s. [ 3 ] This excess velocity is referred to as each galaxy's peculiar velocity .
|
https://en.wikipedia.org/wiki/Virgocentric_flow
|
Virgínia Sampaio Teixeira Ciminelli (born 1954) is a Brazilian metallurgist specializing in extractive metallurgy , the recovery of metals from metal-containing materials, including hydrometallurgy , electrometallurgy , sustainability in metal extraction, and the restoration of water quality in water returned to the environment from mining. [ 1 ] She is a professor at the Federal University of Minas Gerais , in the Department of Metallurgical and Materials Engineering. [ 2 ]
Ciminelli is originally from Belo Horizonte , [ 1 ] where she was born in 1954. [ 3 ] She was an undergraduate at the Federal University of Minas Gerais, where she graduated in 1976 and continued for a master's degree in 1981. She went to Pennsylvania State University in the US for doctoral study, and completed her Ph.D. in 1987. Her dissertation, Oxidation of Pyrite in Alkaline Solutions and Heterogeneous Equilibria of Sulfur and Arsenic-Containing Minerals In Cyanide Solutions , was supervised by Ghanaian materials scientist Kwadwo Osseo-Asare . [ 2 ]
She has been a professor at the Federal University of Minas Gerais since 1977, and a full professor since 1995, [ 2 ] when she became the first female full professor in the university's school of engineering. [ 1 ] She has been a level 1A researcher of the National Council for Scientific and Technological Development (CNPq) since 2001. [ 2 ]
Ciminelli is a member of the Brazilian Academy of Sciences , elected in 2009 [ 3 ] as the first woman in the engineering section. [ 1 ] She was elected to the Brazilian National Academy of Engineering [ pt ] in 2013. [ 4 ] In 2014, she was elected as a foreign associate of the US National Academy of Engineering , "for contributions in environmental hydrometallurgy, and for leadership in national and international technical collaborations". [ 5 ] She became the fourth Brazilian associate of the National Academy of Engineering. [ 2 ]
She received the Commander's Cross and Grand Cross of the National Order of Scientific Merit in 2010 and 2018 respectively. [ 3 ] The Minas Gerais Society of Engineering named her as engineer of the year in 2021. [ 1 ]
|
https://en.wikipedia.org/wiki/Virgínia_Ciminelli
|
Virial coefficients B i {\displaystyle B_{i}} appear as coefficients in the virial expansion of the pressure of a many-particle system in powers of the density, providing systematic corrections to the ideal gas law . They are characteristic of the interaction potential between the particles and in general depend on the temperature. The second virial coefficient B 2 {\displaystyle B_{2}} depends only on the pair interaction between the particles, the third ( B 3 {\displaystyle B_{3}} ) depends on 2- and non-additive 3-body interactions, and so on.
The first step in obtaining a closed expression for virial coefficients is a cluster expansion [ 1 ] of the grand canonical partition function
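(written here in its standard grand-canonical form, since the display itself is conventional)
\Xi \;\equiv\; \sum_{n=0}^{\infty}\lambda^{n}\,Q_{n} \;=\; e^{\,pV/(k_{\text{B}}T)}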
Here p {\displaystyle p} is the pressure, V {\displaystyle V} is the volume of the vessel containing the particles, k B {\displaystyle k_{\text{B}}} is the Boltzmann constant , T {\displaystyle T} is the absolute temperature, λ = exp [ μ / ( k B T ) ] {\displaystyle \lambda =\exp[\mu /(k_{\text{B}}T)]} is the fugacity , with μ {\displaystyle \mu } the chemical potential . The quantity Q n {\displaystyle Q_{n}} is the canonical partition function of a subsystem of n {\displaystyle n} particles:
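(in its standard quantum-statistical form)
Q_{n} = \frac{1}{n!}\,\operatorname{tr}\!\left[e^{-H(1,2,\ldots,n)/(k_{\text{B}}T)}\right]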
Here H ( 1 , 2 , … , n ) {\displaystyle H(1,2,\ldots ,n)} is the Hamiltonian (energy operator) of a subsystem of n {\displaystyle n} particles. The Hamiltonian is a sum of the kinetic energies of the particles and the total n {\displaystyle n} -particle potential energy (interaction energy). The latter includes pair interactions and possibly 3-body and higher-body interactions. The grand partition function Ξ {\displaystyle \Xi } can be expanded in a sum of contributions from one-body, two-body, etc. clusters. The virial expansion is obtained from this expansion by observing that ln Ξ {\displaystyle \ln \Xi } equals p V / ( k B T ) {\displaystyle pV/(k_{B}T)} . In this manner one derives
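(only the lowest coefficient is reproduced here, in its standard form)
B_{2} = V\left(\frac{1}{2}-\frac{Q_{2}}{Q_{1}^{2}}\right),
with analogous but increasingly lengthy expressions for B_{3} and the higher coefficients.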
These are quantum-statistical expressions containing kinetic energies. Note that the one-particle partition function Q 1 {\displaystyle Q_{1}} contains only a kinetic energy term. In the classical limit ℏ = 0 {\displaystyle \hbar =0} the kinetic energy operators commute with the potential operators and the kinetic energies in numerator and denominator cancel mutually. The trace (tr) becomes an integral over the configuration space. It follows that classical virial coefficients depend on the interactions between the particles only and are given as integrals over the particle coordinates.
The derivation of virial coefficients higher than B 3 {\displaystyle B_{3}} quickly becomes a complex combinatorial problem. Making the classical approximation and neglecting non-additive interactions (if present), the combinatorics can be handled graphically as first shown by Joseph E. Mayer and Maria Goeppert-Mayer . [ 2 ]
They introduced what is now known as the Mayer function :
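(in its standard form)
f(1,2) = e^{-u(|{\vec r}_{1}-{\vec r}_{2}|)/(k_{\text{B}}T)} - 1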
and wrote the cluster expansion in terms of these functions. Here u ( | r → 1 − r → 2 | ) {\displaystyle u(|{\vec {r}}_{1}-{\vec {r}}_{2}|)} is the interaction potential between particle 1 and 2 (which are assumed to be identical particles).
The virial coefficients B i {\displaystyle B_{i}} are related to the irreducible Mayer cluster integrals β i {\displaystyle \beta _{i}} through
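(in the standard convention)
B_{i+1} = -\frac{i}{i+1}\,\beta_{i}.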
The latter are concisely defined in terms of graphs.
The rule for turning these graphs into integrals is as follows:
The first two cluster integrals are
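, in their standard form (assuming pairwise interactions and translational invariance),
\beta_{1} = \frac{1}{V}\iint f(1,2)\,d{\vec r}_{1}\,d{\vec r}_{2} = \int f(|{\vec r}\,|)\,d{\vec r},
\qquad
\beta_{2} = \frac{1}{2V}\iiint f(1,2)\,f(1,3)\,f(2,3)\,d{\vec r}_{1}\,d{\vec r}_{2}\,d{\vec r}_{3}.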
The expression of the second virial coefficient is thus:
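(in its standard classical form)
B_{2} = -\tfrac{1}{2}\,\beta_{1} = -\tfrac{1}{2}\int\left(e^{-u(|{\vec r}_{1}|)/(k_{\text{B}}T)} - 1\right) d{\vec r}_{1},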
where particle 2 was assumed to define the origin ( r → 2 = 0 → {\displaystyle {\vec {r}}_{2}={\vec {0}}} ).
This classical expression for the second virial coefficient was first derived by Leonard Ornstein in his 1908 Leiden University Ph.D. thesis.
|
https://en.wikipedia.org/wiki/Virial_coefficient
|
The virial expansion is a model of thermodynamic equations of state . It expresses the pressure P of a gas in local equilibrium as a power series of the density . This equation may be represented in terms of the compressibility factor , Z , as Z ≡ P R T ρ = A + B ρ + C ρ 2 + ⋯ {\displaystyle Z\equiv {\frac {P}{RT\rho }}=A+B\rho +C\rho ^{2}+\cdots } This equation was first proposed by Kamerlingh Onnes . [ 1 ] The terms A , B , and C represent the virial coefficients . The leading coefficient A is defined as the constant value of 1, which ensures that the equation reduces to the ideal gas expression as the gas density approaches zero.
The second, B , and third, C , virial coefficients have been studied extensively and tabulated for many fluids for more than a century. Two of the most extensive compilations are in the books by Dymond [ 2 ] [ 3 ] and the National Institute of Standards and Technology 's Thermo Data Engine Database [ 4 ] and its Web Thermo Tables. [ 5 ] Tables of second and third virial coefficients of many fluids are included in these compilations.
Most equations of state can be reformulated and cast in virial equations to evaluate and compare their implicit second and third virial coefficients. The seminal van der Waals equation of state [ 6 ] was proposed in 1873: P = R T ( v − b ) − a v 2 {\displaystyle P={\frac {RT}{\left(v-b\right)}}-{\frac {a}{v^{2}}}} where v = 1/ ρ is molar volume. It can be rearranged by expanding 1/( v − b ) into a Taylor series : Z = 1 + ( b − a R T ) ρ + b 2 ρ 2 + b 3 ρ 3 + ⋯ {\displaystyle Z=1+\left(b-{\frac {a}{RT}}\right)\rho +b^{2}\rho ^{2}+b^{3}\rho ^{3}+\cdots }
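To make this rearrangement concrete, the short Python sketch below evaluates the virial coefficients implied by the van der Waals form, B(T) = b − a/(RT) and C = b², together with the truncated compressibility factor. The numerical a and b values are illustrative constants of a typical order of magnitude for a simple gas, not data taken from the sources cited here.

R = 8.314      # gas constant, J/(mol K)
a = 0.137      # van der Waals a, Pa m^6 mol^-2 (illustrative magnitude only)
b = 3.9e-5     # van der Waals b, m^3/mol (illustrative magnitude only)

def second_virial_vdw(T):
    # B(T) implied by the van der Waals equation: B = b - a/(RT)
    return b - a / (R * T)

def Z_vdw_truncated(rho, T, n_terms=4):
    # Z = 1 + B*rho + b^2*rho^2 + b^3*rho^3 + ... truncated after n_terms powers of rho
    Z = 1.0 + second_virial_vdw(T) * rho
    for k in range(2, n_terms + 1):
        Z += (b ** k) * rho ** k
    return Z

for T in (150.0, 300.0, 600.0):
    # B becomes less negative as temperature rises, as discussed below
    print(T, second_virial_vdw(T), Z_vdw_truncated(rho=40.0, T=T))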
In the van der Waals equation, the second virial coefficient has roughly the correct behavior, as it decreases monotonically when the temperature is lowered. The third and higher virial coefficients are independent of temperature, and are not correct, especially at low temperatures.
Almost all subsequent equations of state derived from the van der Waals equation, such as those of Dieterici, [ 7 ] Berthelot, [ 8 ] Redlich-Kwong, [ 9 ] and Peng-Robinson, [ 10 ] suffer from the singularity introduced by the 1/(v - b) term.
Other equations of state, beginning with that of Beattie and Bridgeman, [ 11 ] are more closely related to virial equations, and have been shown to be more accurate in representing the behavior of fluids in both gaseous and liquid phases. [ citation needed ] The Beattie-Bridgeman equation of state, proposed in 1928, p = R T v 2 ( 1 − c v T 3 ) ( v + B ) − A v 2 {\displaystyle p={\frac {RT}{v^{2}}}\left(1-{\frac {c}{vT^{3}}}\right)(v+B)-{\frac {A}{v^{2}}}} where
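in the standard Beattie-Bridgeman parameterization (the definitions given here are the conventional ones rather than being quoted from the cited source), A = A_0(1 − a/v) and B = B_0(1 − b/v), with A_0, B_0, a, b, and c as empirical constants fitted for each fluid. The equation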
can be rearranged as Z = 1 + ( B 0 − A 0 R T − c T 3 ) ρ − ( B 0 b − A 0 a R T + B 0 c T 3 ) ρ 2 + ( B 0 b c T 3 ) ρ 3 {\displaystyle Z=1+\left(B_{0}-{\frac {A_{0}}{RT}}-{\frac {c}{T^{3}}}\right)\rho -\left(B_{0}b-{\frac {A_{0}a}{RT}}+{\frac {B_{0}c}{T^{3}}}\right)\rho ^{2}+\left({\frac {B_{0}bc}{T^{3}}}\right)\rho ^{3}} The Benedict-Webb-Rubin equation of state [ 12 ] of 1940 represents better isotherms below the critical temperature: Z = 1 + ( B 0 − A 0 R T − C 0 R T 3 ) ρ + ( b − a R T ) ρ 2 + ( α a R T ) ρ 5 + c ρ 2 R T 3 ( 1 + γ ρ 2 ) exp ( − γ ρ 2 ) {\displaystyle Z=1+\left(B_{0}-{\frac {A_{0}}{RT}}-{\frac {C_{0}}{RT^{3}}}\right)\rho +\left(b-{\frac {a}{RT}}\right)\rho ^{2}+\left({\frac {\alpha a}{RT}}\right)\rho ^{5}+{\frac {c\rho ^{2}}{RT^{3}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right)}
More improvements were achieved by Starling [ 13 ] in 1972: Z = 1 + ( B 0 − A 0 R T − C 0 R T 3 + D 0 R T 4 − E 0 R T 5 ) ρ + ( b − a R T − d R T 2 ) ρ 2 + α ( a R T + d R T 2 ) ρ 5 + c ρ 2 R T 3 ( 1 + γ ρ 2 ) exp ( − γ ρ 2 ) {\displaystyle Z=1+\left(B_{0}-{\frac {A_{0}}{RT}}-{\frac {C_{0}}{RT^{3}}}+{\frac {D_{0}}{RT^{4}}}-{\frac {E_{0}}{RT^{5}}}\right)\rho +\left(b-{\frac {a}{RT}}-{\frac {d}{RT^{2}}}\right)\rho ^{2}+\alpha \left({\frac {a}{RT}}+{\frac {d}{RT^{2}}}\right)\rho ^{5}+{\frac {c\rho ^{2}}{RT^{3}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right)}
Following are plots of reduced second and third virial coefficients against reduced temperature according to Starling: [ 13 ]
The exponential terms in the last two equations correct the third virial coefficient so that the isotherms in the liquid phase can be represented correctly. The exponential term converges rapidly as ρ increases, and if only the first two terms in its Taylor expansion series are taken, 1 − γ ρ 2 {\displaystyle 1-\gamma \rho ^{2}} , and multiplied with 1 + γ ρ 2 {\displaystyle 1+\gamma \rho ^{2}} , the result is 1 − γ 2 ρ 4 {\displaystyle 1-\gamma ^{2}\rho ^{4}} , which contributes a c / R T 3 {\displaystyle c/RT^{3}} term to the third virial coefficient, and one term to the eighth virial coefficient, which can be ignored. [ original research? ]
After the expansion of the exponential terms, the Benedict-Webb-Rubin and Starling equations of state have this form: Z = 1 + b ρ r + c ρ r 2 + f ρ r 5 {\displaystyle Z=1+b\rho _{r}+c\rho _{r}^{2}+f\rho _{r}^{5}}
The three-term virial equation or a cubic virial equation of state Z = 1 + B ρ + C ρ 2 {\displaystyle Z=1+B\rho +C\rho ^{2}} has the simplicity of the Van der Waals equation of state without its singularity at v = b . Theoretically, the second virial coefficient represents bimolecular attraction forces, and the third virial term represents the repulsive forces among three molecules in close contact. [ citation needed ]
With this cubic virial equation, the coefficients B and C can be solved in closed form. Imposing the critical conditions: d P d v = 0 and d 2 P d v 2 = 0 {\displaystyle {\frac {\mathrm {d} P}{\mathrm {d} v}}=0\qquad {\text{and}}\qquad {\frac {\mathrm {d} ^{2}P}{\mathrm {d} v^{2}}}=0} the cubic virial equation can be solved to yield: B = − v c , {\displaystyle B=-v_{c},} C = v c 2 3 , {\displaystyle C={\frac {v_{c}^{2}}{3}},} and Z c = P c v c R T c = 1 3 . {\displaystyle Z_{c}={\frac {P_{c}v_{c}}{RT_{c}}}={\frac {1}{3}}.} Z c {\displaystyle Z_{c}} is therefore 0.333, compared to 0.375 from the Van der Waals equation.
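A minimal symbolic check of this closed-form solution, written with the generic SymPy library rather than code from any cited source, reproduces B = −v_c, C = v_c²/3, and Z_c = 1/3:

import sympy as sp

R, T, vc, v = sp.symbols('R T v_c v', positive=True)
B, C = sp.symbols('B C')

# Cubic virial equation of state written in terms of molar volume v = 1/rho
P = R * T * (1 + B / v + C / v**2) / v

# Critical-point conditions: dP/dv = 0 and d2P/dv2 = 0 at v = v_c
eq1 = sp.Eq(sp.diff(P, v).subs(v, vc), 0)
eq2 = sp.Eq(sp.diff(P, v, 2).subs(v, vc), 0)
sol = sp.solve([eq1, eq2], [B, C], dict=True)[0]
print(sol)                                   # {B: -v_c, C: v_c**2/3}

# Critical compressibility factor Z_c = P_c v_c / (R T_c)
Zc = sp.simplify(P.subs(sol).subs(v, vc) * vc / (R * T))
print(Zc)                                    # 1/3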
Between the critical point and the triple point is the saturation region of fluids. In this region, the gaseous phase coexists with the liquid phase under saturation pressure P sat {\displaystyle P_{\text{sat}}} , and the saturation temperature T sat {\displaystyle T_{\text{sat}}} . Under the saturation pressure, the liquid phase has a molar volume of v l {\displaystyle v_{\text{l}}} , and the gaseous phase has a molar volume of v g {\displaystyle v_{\text{g}}} . The corresponding molar densities are ρ l {\displaystyle \rho _{\text{l}}} and ρ g {\displaystyle \rho _{\text{g}}} . These are the saturation properties needed to compute second and third virial coefficients.
A valid equation of state must produce an isotherm which crosses the horizontal line of P sat {\displaystyle P_{\text{sat}}} at v l {\displaystyle v_{\text{l}}} and v g {\displaystyle v_{\text{g}}} , on T sat {\displaystyle T_{\text{sat}}} . [ citation needed ] Under P sat {\displaystyle P_{\text{sat}}} and T sat {\displaystyle T_{\text{sat}}} , gas is in equilibrium with liquid. This means that the PρT isotherm has three roots at P sat {\displaystyle P_{\text{sat}}} . The cubic virial equation of state at T sat {\displaystyle T_{\text{sat}}} is: P sat = R T sat ( 1 + B ρ + C ρ 2 ) ρ {\displaystyle P_{\text{sat}}=RT_{\text{sat}}\left(1+B\rho +C\rho ^{2}\right)\rho } It can be rearranged as: 1 − R T sat P sat ( 1 + B ρ + C ρ 2 ) ρ = 0 {\displaystyle 1-{\frac {RT_{\text{sat}}}{P_{\text{sat}}}}\left(1+B\rho +C\rho ^{2}\right)\rho =0} The factor R T sat / P sat {\displaystyle RT_{\text{sat}}/P_{\text{sat}}} is the volume of saturated gas according to the ideal gas law, and can be given a unique name v id {\displaystyle v^{\text{id}}} : v id = R T sat P sat {\displaystyle v^{\text{id}}={\frac {RT_{\text{sat}}}{P_{\text{sat}}}}} In the saturation region, the cubic equation has three roots, and can be written alternatively as: ( 1 − v l ρ ) ( 1 − v m ρ ) ( 1 − v g ρ ) = 0 {\displaystyle \left(1-v_{\text{l}}\rho \right)\left(1-v_{\text{m}}\rho \right)\left(1-v_{\text{g}}\rho \right)=0} which can be expanded as: 1 − ( v l + v g + v m ) ρ + ( v l v g + v g v m + v m v l ) ρ 2 − v l v g v m ρ 3 = 0 {\displaystyle 1-\left(v_{\text{l}}+v_{\text{g}}+v_{m}\right)\rho +\left(v_{\text{l}}v_{\text{g}}+v_{\text{g}}v_{\text{m}}+v_{\text{m}}v_{\text{l}}\right)\rho ^{2}-v_{\text{l}}v_{\text{g}}v_{\text{m}}\rho ^{3}=0} v m {\displaystyle v_{\text{m}}} is a volume of an unstable state between v l {\displaystyle v_{\text{l}}} and v g {\displaystyle v_{\text{g}}} . The cubic equations are identical. Therefore, from the linear terms in these equations, v m {\displaystyle v_{m}} can be solved: v m = v id − v l − v g {\displaystyle v_{\text{m}}=v^{\text{id}}-v_{\text{l}}-v_{\text{g}}} From the quadratic terms, B can be solved: B = − ( v l v g + v g v m + v m v l ) v id {\displaystyle B=-{\frac {\left(v_{\text{l}}v_{\text{g}}+v_{\text{g}}v_{\text{m}}+v_{\text{m}}v_{\text{l}}\right)}{v^{\text{id}}}}} And from the cubic terms, C can be solved: C = v l v g v m v id {\displaystyle C={\frac {v_{\text{l}}v_{\text{g}}v_{\text{m}}}{v^{\text{id}}}}} Since v l {\displaystyle v_{\text{l}}} , v g {\displaystyle v_{\text{g}}} and P sat {\displaystyle P_{\text{sat}}} have been tabulated for many fluids with T sat {\displaystyle T_{\text{sat}}} as a parameter, B and C can be computed in the saturation region of these fluids. The results are generally in agreement with those computed from Benedict-Webb-Rubin and Starling equations of state. [ citation needed ]
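As a worked illustration of these closed-form relations, the short Python sketch below computes B and C from saturation-state inputs. The numerical values are hypothetical placeholders chosen only to show the arithmetic, not tabulated data for any real fluid.

R = 8.314            # J/(mol K)
T_sat = 300.0        # K   (hypothetical saturation temperature)
P_sat = 3.0e5        # Pa  (hypothetical saturation pressure)
v_l = 8.0e-5         # m^3/mol, saturated-liquid molar volume (hypothetical)
v_g = 7.0e-3         # m^3/mol, saturated-gas molar volume (hypothetical)

v_id = R * T_sat / P_sat               # ideal-gas molar volume at saturation
v_m = v_id - v_l - v_g                 # unstable middle root of the cubic
B = -(v_l * v_g + v_g * v_m + v_m * v_l) / v_id
C = v_l * v_g * v_m / v_id

print(v_m, B, C)                       # second and third virial coefficients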
|
https://en.wikipedia.org/wiki/Virial_expansion
|
Viridiplantae ( lit. ' green plants ' ; kingdom Plantae sensu stricto ) [ 6 ] is a clade of around 450,000–500,000 species of eukaryotic organisms, most of which obtain their energy by photosynthesis . The green plants are chloroplast -bearing autotrophs that play important primary production roles in both terrestrial and aquatic ecosystems . [ 7 ] They include green algae , which are primarily aquatic, and the land plants ( embryophytes , Plantae sensu strictissimo ), which emerged within freshwater green algae. [ 8 ] [ 9 ] [ 10 ] Green algae as traditionally circumscribed exclude the land plants, rendering the group paraphyletic; however, it is cladistically accurate to think of land plants as a special clade of green algae that evolved to thrive on dry land. [ 11 ] Since the realization that the embryophytes emerged from within the green algae, some authors are starting to include them. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Viridiplantae species all have cells with cellulose in their cell walls , and primary chloroplasts derived from endosymbiosis with cyanobacteria that contain chlorophylls a and b and lack phycobilins . Corroborating this, a basal phagotroph Archaeplastida group has been found in the Rhodelphydia . [ 16 ] In some classification systems, the group has been treated as a kingdom , [ 17 ] under various names, e.g. Viridiplantae, Chlorobionta , or simply Plantae , the latter expanding the traditional plant kingdom of embryophytes to include the green algae . Adl et al. , who produced a classification for all eukaryotes in 2005, introduced the name Chloroplastida for this group, reflecting the group having primary chloroplasts. They rejected the name Viridiplantae on the grounds that some of the species are not plants as understood traditionally. [ 18 ] Together with Rhodophyta , glaucophytes and other basal groups, Viridiplantae belong to a larger clade called Archaeplastida which in itself is sometimes described as Plantae sensu lato .
Leliaert et al. (2012) propose the following simplified taxonomy of the Viridiplantae. [ 19 ]
In 2019, a phylogeny based on genomes and transcriptomes from 1,153 plant species was proposed. [ 21 ] The placing of algal groups is supported by phylogenies based on genomes from the Mesostigmatophyceae and Chlorokybophyceae that have since been sequenced. Both the "chlorophyte algae" and the "streptophyte algae" are treated as paraphyletic (vertical bars beside phylogenetic tree diagram) in this analysis. [ 22 ] [ 23 ] The classification of Bryophyta is supported both by Puttick et al. 2018, [ 24 ] and by phylogenies involving the hornwort genomes that have also since been sequenced. [ 25 ] [ 26 ]
Rhodophyta
Glaucophyta
Prasinodermophyta
Chlorophyta
Mesostigmatophyceae
Chlorokybophyceae
Spirotaenia
Klebsormidiales
Chara
Coleochaetales
Zygnematophyceae
Hornworts
Liverworts
Mosses
Lycophytes
Ferns
Gymnosperms
Angiosperms
Ancestrally, the green algae were flagellates. [ 19 ]
|
https://en.wikipedia.org/wiki/Viridiplantae
|
In September 2021, Synthetic Genomics Inc. ( SGI ), a private company located in La Jolla , California, changed its name to Viridos. [ 1 ] The company is focused on the field of synthetic biology , especially harnessing photosynthesis with microalgae to create alternatives to fossil fuels. [ 2 ] Viridos designs and builds biological systems to address global sustainability problems.
Synthetic biology is an interdisciplinary branch of biology and engineering, combining fields such as biotechnology , evolutionary biology , molecular biology , systems biology , biophysics , computer engineering , and genetic engineering . Synthetic Genomics uses techniques such as software engineering , bioprocessing , bioinformatics , biodiscovery, analytical chemistry , fermentation , cell optimization , and DNA synthesis to design and build biological systems. The company produces or performs research in the fields of sustainable bio-fuels , insect resistant crops, transplantable organs , targeted medicines, DNA synthesis instruments as well as a number of biological reagents .
SGI mainly operates in three end markets : research, bioproduction and applied products. The research segment focuses on genomics solutions for academic and commercial research organizations . The commercial products and services include instrumentation , reagents , DNA synthesis services, and bioinformatics services and software . In 2015, the company launched the BioXP 3200 system, [ 3 ] a fully automated benchtop instrument that produces DNA fragments from many different sources of genomic data.
The company's efforts in bio-based production are intended to improve both existing production hosts and develop entirely new synthetic production hosts with the goal of more efficient routes to bioproducts .
SGI has a number of commercial as well as research and development stage programs across a variety of industries. Some of these research partnerships include:
Synthetic Genomics was founded in the spring of 2005 by J. Craig Venter , Nobel Laureate Hamilton O. Smith , Juan Enriquez, and David Kiernan. Venter and Smith's previous company, Celera Genomics , was a driving force in the race to sequence the human genome . [ 9 ] The firm takes its name from the phrase synthetic genomics , a discipline of synthetic biology concerned with generating organisms artificially using genetic material. [ 10 ] [ 11 ]
Many of SGI's collaborations have been with energy companies. In 2007, SGI worked with BP to commercialize microbial-based processes for increasing the conversion and recovery of subsurface hydrocarbons. [ 12 ] In 2009, SGI received funding from ExxonMobil to produce biofuels on an industrial scale using recombinant algae and other microorganisms. [ 13 ] [ 14 ] The company purchased an 81-acre site (33 ha) in the Imperial Valley in Southern California to produce algae fuel for their collaboration with Exxon Mobil. [ 15 ] They also signed a collaborative agreement with New England Biolabs in 2012 to launch a Gibson Assembly Master Mix product for synthetic and molecular biology applications. [ 16 ]
In 2010, Synthetic Genomics spun off a new subsidiary, Synthetic Genomics Vaccines Inc., to develop next-generation vaccines. [ 17 ]
In 2014, SGI expanded into the field of organ transplantation with a collaborative agreement with United Therapeutics valued at $50 million, [ 18 ] and brought in Oliver Fetzer as CEO . [ 19 ]
|
https://en.wikipedia.org/wiki/Viridos_(company)
|
Virilization or masculinization is the biological development of adult male characteristics in young males or females. [ 1 ] Most of the changes of virilization are produced by androgens .
Virilization is a medical term commonly used in three contexts in medicine and the biology of sex: prenatal biological sexual differentiation , the postnatal changes of typical chromosomal male (46, XY) puberty , and excessive androgen effects in typical chromosomal females (46, XX). It is also the intended result of androgen replacement therapy in males with delayed puberty and low testosterone .
In the prenatal period, virilization refers to closure of the perineum , thinning and wrinkling (rugation) of the scrotum , growth of the penis, and closure of the urethral groove to the tip of the penis . In this context, masculinization is synonymous with virilization .
Prenatal virilization of XX fetuses and undervirilization of XY fetuses are common causes of ambiguous genitalia , as in conditions such as congenital adrenal hyperplasia and 5α-reductase 2 deficiency .
For many years, it was widely believed that in mammals , the female is the "default" developmental pathway, and the SRY gene on the Y chromosome is responsible for suppressing the development of female characteristics and stimulating male characteristics. In this scenario, an embryo would passively develop female sexual characteristics without intervention by the SRY gene. However, in the early 2000s, other genes, such as WNT4 and RSPO1 , were discovered that perform the opposite function – i.e., genes which suppress masculinization and stimulate feminization. [ 2 ]
Two processes, defeminization and masculinization , are involved in producing male-typical morphology and behavior.
Prenatal virilization of a genetically female fetus can occur when an excessive amount of androgen is produced by the fetal adrenal glands or is present in maternal blood, resulting in virilization of the female genitalia such as an enlarged clitoris.
It can also be associated with progestin-induced virilisation .
Undervirilization can occur if a genetic male cannot produce enough androgen or the body tissues cannot respond to it. Extreme undervirilization occurs when no significant androgen hormones can be produced or the body is completely insensitive to androgens, in which case a female phenotype will develop. Partial undervirilization produces ambiguous genitalia part-way between male and female. Examples of undervirilization in fetuses with a 46,XY karyotype are androgen insensitivity syndrome and 5 alpha reductase deficiency .
In common as well as medical usage, virilization often refers to the process of normal male puberty . These effects include growth of the penis and the testes, accelerated growth, development of pubic hair , and other androgenic hair of face, torso, and limbs, deepening of the voice , increased musculature, thickening of the jaw, prominence of the neck cartilage, and broadening of the shoulders.
Virilization can occur in childhood in both males and females due to excessive amounts of androgens. Typical effects of virilization in children are pubic hair , accelerated growth and bone maturation, increased muscle strength , acne , and adult body odor. In males, virilization may signal precocious puberty , while congenital adrenal hyperplasia and androgen producing tumors (usually) of the gonads or adrenals are occasional causes in both sexes. [ 3 ]
Virilization in females can manifest as clitoral enlargement, increased muscle strength, acne, hirsutism , frontal hair thinning, deepening of the voice, menstrual disruption due to anovulation , and a strengthened libido. [ 4 ] Some of the possible causes of virilization in females are:
Transgender people who were medically assigned female at birth sometimes elect to take hormone replacement therapy . This process causes virilization by inducing many of the effects of a typically male puberty. Many of these effects are permanent, but some effects can be reversed if the transgender individual stops or pauses their medical treatment.
Demasculinization refers to the reversal of virilization. Some but not all aspects of virilization are reversible. Demasculinization occurs naturally with andropause , pathologically with hypogonadism , and artificially or medically with antiandrogens , estrogens , and orchiectomy . It is desired by many transgender women who have undergone the changes of pubertal masculinization. Some virilized traits nevertheless remain (such as body hair, a hardened jawline and an enlarged larynx ) because of the way virilization affects the body's physiology.
|
https://en.wikipedia.org/wiki/Virilization
|
The viriome of a habitat or environment is the total virus content within it. [ 1 ] A viriome may relate to the viruses that inhabit a multicellular organism as well as the phages that are residing inside bacteria and archaea .
This term exists in contrast to the virome , which more commonly refers to the collection of nucleic acids contained by viruses in a microbiome .
|
https://en.wikipedia.org/wiki/Viriome
|
Virivore (equivalently virovore ) comes from the English prefix viro- meaning virus, derived from the Latin word for poison, [ 1 ] and the suffix -vore from the Latin word vorare , meaning to eat, or to devour; [ 2 ] therefore, a virivore is an organism that consumes viruses. Virivory is a well-described process in which organisms, primarily heterotrophic protists , [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] consume viruses, though some metazoans [ 9 ] [ 10 ] are known to do so, as well.
Viruses are considered a top predator in marine environments, as they can lyse microbes and release nutrients (i.e. the viral shunt ). Viruses also play an important role in the structuring of microbial trophic relationships and regulation of carbon flow. [ 11 ] [ 12 ]
The first described virovore was a small marine flagellate that was shown to ingest and digest virus particles. [ 3 ] Subsequently, numerous studies directly and indirectly demonstrated the consumption of virions. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] In 2022, DeLong et al. showed that over the course of two days, the ciliates Halteria and Paramecium reduced chlorovirus plaque-forming units by up to two orders of magnitude, supporting the idea that nutrients were transferred from the viruses to consumers. [ 8 ]
Furthermore, the Halteria population grew with chlorovirus as the only source of nutrition, and grew minimally in the absence of chlorovirus. [ 8 ] The Paramecium population, however, did not differ in growth when fed chloroviruses compared to the control group. Since the Paramecium population size remained constant in the presence of only chloroviruses, this indicated that Paramecium is capable of maintaining its population size, but not growing, using chlorovirus as the sole carbon source. These data showed that some grazers can grow on viruses, but this does not apply to all grazers. It was estimated that Halteria consumed between 10,000 and 1 million viruses per day. It is known that small protists, such as Halteria and Paramecium , are consumed by zooplankton, indicating the movement of viral-derived energy and matter up through the aquatic food web. This contradicts the idea that the viral shunt limits the movement of energy up food webs by cutting off the grazer-microbe interaction. The amount of energy and matter passed up would depend on virion size and nutritional content, which would vary depending on the strain.
Viruses are the most abundant biological entities in the world's oceans. [ 13 ] [ 14 ] [ 15 ] The life cycle of a lytic virus is an important process within the world's oceans for the cycling of dissolved organic matter and particulate organic matter , i.e. the viral shunt . [ 16 ] [ 13 ] [ 14 ] Viral particles themselves also make up a large proportion of the nitrogen- and phosphorus-rich particles within the dissolved organic matter pool, as they are made up of lipids, amino acids, nucleic acids, and likely carbon incorporated from host cells. [ 13 ] [ 14 ] It is considered that viruses can complement a grazer's diet if they are ingested and the grazing microbe is not infected. [ 15 ]
General grazing on viruses is widespread throughout the marine environment, with grazing rates as high as 90.3 mL −1 day −1 . [ 13 ] When both bacteria and viruses are present, viruses can be ingested at rates comparable to bacteria. [ 3 ]
Using Oikopleura dioica and the Emiliania huxleyi virus (EhV) as a model, scientists estimated the nutritional gain from viruses. [ 13 ]
It is suggested that in smaller grazers, viruses could potentially have a more significant impact on host nutrition. [ 13 ] For example, in nanoflagellates , the estimated contribution is 9% carbon, 14% nitrogen, and 28% phosphorus. [ 13 ]
While small bacteria are the ideal food source for grazers because of their size and carbon content, viruses are small, non-motile, and extremely abundant, making them an alternative nutritional choice. [ 15 ] For general grazers to obtain the same amount of carbon from viruses that they get from bacteria, they would need to consume 1000 times more viruses. [ 15 ] This does not make viruses the ideal carbon source for grazers. However, there are other benefits to consuming viruses besides growth. Studies show that digested viral particles release amino acids that the grazer can then utilize during its own polypeptide synthesis. [ 15 ]
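A back-of-the-envelope sketch of this thousand-fold factor, using assumed order-of-magnitude carbon contents rather than figures from the cited studies:

carbon_per_bacterium_fg = 20.0   # femtograms of carbon per bacterial cell (assumed)
carbon_per_virus_fg = 0.02       # femtograms of carbon per virion (assumed)

viruses_per_bacterium_equivalent = carbon_per_bacterium_fg / carbon_per_virus_fg
print(viruses_per_bacterium_equivalent)   # ~1000 virions to match one bacterium's carbon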
Trophic interactions between grazers, bacteria, and viruses are important in regulating nutrient and organic matter cycling. [ 15 ] The viral sweep is a mechanism in which grazers cycle carbon back into the classical food web by ingesting viral particles. [ 13 ] Infection of host cells leads to the release of viral progeny, which are subsequently consumed by grazers. [ 13 ] Grazers are then consumed by higher trophic organisms, therefore cycling carbon from viruses back into the classical food web and to higher trophic levels. [ 13 ]
The viral sweep could be affected by many factors, such as the size and abundance of the viral particles. [ 13 ] The size of the virus will affect the elemental content of the virus particles. [ 14 ] For example, a virus with a larger capsid will contribute more carbon, and viruses with larger genomes will contribute more nitrogen and phosphorus as a result of the increased nucleic acids. [ 14 ] Additionally, the impact of the viral sweep could be more significant if grazers preying on bacteria infected with viruses are also considered. [ 15 ] Overall, by consuming bacteria and viruses, grazers play an important role in cycling carbon. [ 15 ]
The consumption of viruses is largely based on the feeding behaviour of the organism.
Filter feeding is a type of suspension feeding. [ 17 ] Filter feeders usually actively capture single food particles on cilia, hairs, mucus, or other structures. [ 17 ] Researchers used Salpingoeca as a model filter feeder to observe changes in viral abundance. Salpingoeca produces a lorica that helps it attach to the substrate. [ 15 ] It also has a single flagellum that creates a water current, transporting small particles towards it, where tiny pseudopodia engulf the prey particles. [ 15 ] When viruses were co-incubated with Salpingoeca, viral abundances decreased steadily over 90 days, showing that filter feeding is an effective mechanism for feeding on viruses. [ 15 ]
Grazers move over surfaces to gather and ingest food as they go. [ 17 ] Researchers used Thaumatomonas coloniensis as a model grazer to observe changes in viral abundances. [ 15 ] T. coloniensis glides along the substrate and produces filopodia , which are used to engulf particles associated with the substrate. [ 15 ] Over the 90 days, viral abundances steadily decreased when co-incubated with T. coloniensis, showing that grazing is an effective mechanism for feeding on viruses. [ 15 ]
Raptorial feeding is a form of active feeding, in which the organism seeks out its prey. [ 15 ] Researchers used Goniomonas truncata as a model of raptorial feeding. [ 15 ] G. truncata is a cryptomonad that has two flagella, which are used to swim close to the substrate searching for food, and it has vacuoles to aid in food uptake. [ 15 ] In the presence of G. truncata, viral abundances did not significantly decrease over the course of 90 days. [ 15 ] However, this does not exclude the possibility that viral particles are taken up and then released back into the environment. [ 15 ] These data show that raptorial feeding may not be a method of viral grazing, but it may have other ecological implications in terms of viral transmission.
Grazing on viruses differs between virus types, and it is therefore subject to selective feeding. Flagellates are capable of ingesting many viruses of different sizes, with the smallest viruses having the lowest ingestion rate. [ 3 ] There is huge diversity amongst marine viruses in size, shape, morphology, and surface charge, which may influence selection and therefore ingestion rates. [ 3 ] Additionally, digestion rates of different viruses by the same flagellate were variable, implying selection when grazing on viruses. [ 3 ] For example, significant differences in virus removal by Tetrahymena pyriformis were observed when the protist was co-incubated with 13 different types of viruses. [ 18 ] Additionally, the removal rates for the specific viruses were maintained when the protist was co-incubated with multiple viruses at once. [ 18 ] T. pyriformis were able to identify viruses as food, which drives their movement and consumption of certain viruses over others, supporting the idea that some protists are capable of selective grazing. [ 18 ]
Viruses have the capacity to influence the grazing of their host cells during infection, showing that viral infection plays a role in selective grazing. [ 19 ] [ 20 ]
Copepods are a key link in marine food webs as they connect primary and secondary production with higher trophic levels. [ 19 ] When phytoplankton Emiliania huxleyi were infected with the coccolithovirus EhV-86, ingestion of the infected cells by the calanoid copepod Acartia tonsa was significantly reduced compared to non-infected cells, indicating selective grazing against infected cells. [ 19 ] These results suggest that viral infections reduce grazing, and may potentially reduce food web efficiency by keeping the carbon within the viral shunt - microbial loop , and inhibiting the movement of carbon to higher trophic levels. [ 19 ] This emphasizes the importance of the viral sweep for cycling carbon into higher trophic levels.
Conversely, Oxyrrhis marina had a grazing preference for virally infected Emiliania huxleyi. [ 20 ] It is suggested that the preference for infected cells over non-infected cells is due to physiological changes or a change in the size of the host cell. [ 20 ] O. marina prefers to graze on larger cells, as they potentially offer greater nutritional value than a smaller cell that requires the same amount of energy to consume. [ 20 ] Infected E. huxleyi exhibit increased cell size compared to non-infected cells, making them ideal prey for O. marina. [ 20 ] Infected E. huxleyi may also be selected for their palatability as a result of physiological changes during infection. [ 20 ] For example, infected cells will have higher nucleic acid content compared to non-infected cells, which could improve the nutritional gain to the grazers. [ 20 ] Additionally, grazing activity of O. marina has been linked to prey with lower dimethylsulfoniopropionate lyase (DMSP lyase) activity, as they would produce less of the potentially toxic compound acrylate . [ 20 ] Virally infected E. huxleyi show reduced levels of DMSP lyase activity, which makes them appealing to O. marina by reducing their exposure to harmful compounds. [ 20 ] Lastly, chemical cues such as the release of dimethyl sulfide and hydrogen peroxide during infection likely generate a gradient, making it easier for O. marina to locate the infected E. huxleyi. [ 20 ] Preferential grazing on infected cells would make the carbon available to higher trophic levels by sequestering it in particulate form. [ 20 ]
Overall, grazing on virus particles and virally infected cells are subject to selective grazing.
Studies have shown that viruses may be ingested and digested, or ingested and released back into the environment by grazers. [ 15 ] [ 21 ] The observation that grazers could potentially release viruses back into the environment after ingestion could have significant ecological impacts. [ 21 ]
The ingestion and release of viruses could mediate the transmission and dispersal of viruses in the marine environment. [ 21 ] Using copepods as the model transmission vector, and EhV as the model virus, Frada et al. identified a potential mechanism of viral dispersal in marine environments. [ 21 ]
EhV particles can be consumed by copepods either as individual virion particles or via host cell infection (in this case, infected Emiliania huxleyi). [ 21 ] When infected E. huxleyi was co-incubated with copepods, the fecal pellets produced by the copepods contained an average of 4500 EhVs per pellet. [ 21 ] These virion containing pellets were then co-incubated with a fresh culture of E. huxleyi, and rapid viral-mediated lysis of the host cells was observed. [ 21 ] When EhV particles alone were co-incubated with copepods, i.e. no E. huxleyi, the fecal particles collected did not contain any virion particles. [ 21 ] However, when they fed copepods EhV and Thalassiosira weissflogii , a diatom outside the host range of EhV, the fecal pellets collected contained 200 EhVs per pellet. [ 21 ] These pellets when co-incubated with a fresh E. huxleyi culture were highly infectious and completely killed the culture. [ 21 ] The absence of virion particles in the fecal pellets produced from sole EhV incubation supports the idea that grazers exhibit selective grazing for viruses. EhV can still be taken up by copepods through host cell infection and when in the presence of an ideal food source. [ 21 ] Since viral abundance follows bacterial abundance, it is unlikely that there will be a marine environment where viruses will be the sole nutrient source for grazers. [ 22 ]
The results of this experiment have significant ecological impacts. Copepods are capable of moving up and down the water column, and migrating short distances between feeding zones. [ 21 ] [ 23 ] Specifically, for copepods and EhV, the movement of copepods can transport viruses into new and non-infected populations of E. huxleyi, promoting bloom demise. [ 21 ] Additionally, fecal pellets can sink from the mixed layer into deeper parts of the ocean, where they can be assimilated multiple times. [ 23 ] These two scenarios represent potential mechanisms in which viruses can be introduced into new marine environments.
Grazers are not the only organisms capable of removing viruses from the water column. Non-host organisms such as anemones , polychaete larvae, sea squirts , crabs, cockles, oysters, and sponges are all capable of significantly reducing viral abundance. [ 24 ] Sponges were found to have the greatest potential for removing viruses. [ 24 ]
The method in which non-host organisms disrupt the viral-host contact is known as transmission interference. [ 24 ] Non-host organisms can either have a direct impact by removing the host-organisms, or an indirect one by removing the viruses. [ 24 ] These mechanisms cause a reduction in the virus-host contact rates which could significantly impact local microbial population dynamics. [ 24 ]
Non-host organisms are capable of removing viruses at rates comparable to those for natural food particles, bacterial cells, and algal cells, which is higher than the viral clearance rate of around 4% observed for grazers. [ 3 ] [ 24 ] In regions of high sponge densities, such as coastal and tropical regions, it is likely that the virus removal rate has been underestimated. [ 24 ] The effective removal of viruses likely has global ecological impacts that have gone unrecognized. [ 24 ]
|
https://en.wikipedia.org/wiki/Virivore
|
ViroCap is a test announced in 2015 by researchers at Washington University in St. Louis that can detect most of the infectious viruses that affect humans and animals . It was demonstrated to be as sensitive as the various polymerase chain reaction assays for the viruses. It will not be available for clinical use until validation studies are done, which may take years. [ 1 ] The test examines two million sequences of genetic data from viruses. The research was published in September 2015 in the online journal Genome Research . [ 2 ] [ 3 ]
|
https://en.wikipedia.org/wiki/ViroCap
|
ViroPharma Incorporated was a pharmaceutical company that developed and sold drugs addressing serious diseases treated by physician specialists and in hospital settings. The company focused its product development activities on viruses and human disease , including those caused by cytomegalovirus (CMV) and hepatitis C virus (HCV) infections. It was purchased by Shire in 2013, with Shire paying around $4.2 billion for the company in a deal that was finalized in January 2014. [ 2 ] ViroPharma was a member of the NASDAQ Biotechnology Index and the S&P 600 .
The company had strategic relationships with GlaxoSmithKline , Schering-Plough , and Sanofi-Aventis . ViroPharma acquired Lev Pharmaceuticals in a merger in 2008. [ 3 ] [ 4 ]
ViroPharma Incorporated was founded in 1994 by Claude H. Nash (Chief Executive Officer), Mark A. McKinlay (Vice President, Research & Development), Marc S. Collett (Vice President, Discovery Research), Johanna A. Griffin (Vice President, Business Development), and Guy D. Diana (Vice President, Chemistry Research.) None of the founders are still with the company.
In November 2013, Shire plc agreed to acquire ViroPharma for $4.2 billion; the transaction closed in January 2014. [ 5 ]
Vancocin Pulvules HCl : licensed from Eli Lilly in 2004. [ 6 ] Oral Vancocin is an antibiotic for treatment of staphylococcal enterocolitis and antibiotic-associated pseudomembranous colitis caused by Clostridioides difficile .
Maribavir is an oral antiviral drug candidate licensed from GlaxoSmithKline in 2003 for the prevention and treatment of human cytomegalovirus disease in hematopoietic stem cell / bone marrow transplant patients. In February 2006, ViroPharma announced that the United States Food and Drug Administration (FDA) had granted the company fast track status for maribavir. [ 7 ] [ 8 ]
In March 2006, the company announced that a Phase II study with maribavir demonstrated that prophylaxis with maribavir displays strong antiviral activity, as measured by statistically significant reduction in the rate of reactivation of CMV in recipients of hematopoietic stem cell / bone marrow transplants . In an intent-to-treat analysis of the first 100 days after the transplant, the number of subjects who required pre-emptive anti-CMV therapy was statistically significantly reduced ( p-value = 0.051 to 0.001) in each of the maribavir groups compared to the placebo group (57% for placebo vs. 15%, 30%, and 15% for maribavir 100 mg twice daily, 400 mg daily, and 400 mg twice daily, respectively).
ViroPharma conducted a Phase III clinical study to evaluate the prophylactic use for the prevention of cytomegalovirus disease in recipients of allogeneic stem cell transplant patients. In February 2009, ViroPharma announced that the Phase III study failed to achieve its goal, showing no significant difference between maribavir and a placebo in reducing the rate of CMV disease. [ 9 ]
Oral pleconaril was ViroPharma's first compound, licensed from Sanofi in 1995. Pleconaril is active against viruses in the picornavirus family. ViroPharma's first indication was for enteroviral meningitis , but that indication was abandoned when the clinical trials did not demonstrate efficacy.
In 2001, ViroPharma submitted a New Drug Application for pleconaril to the FDA for the common cold . [ 10 ] On March 19, 2002, the FDA Antiviral Advisory Committee found that the company had failed to show adequate safety, and the FDA subsequently issued a not-approvable letter. [ 11 ]
In November 2004, ViroPharma licensed pleconaril to Schering-Plough , [ 12 ] which was developing an intranasal formulation for the common cold and asthma exacerbations ( Schering-Plough Development Pipeline ). In August 2006, Schering-Plough started a Phase II clinical trial .
|
https://en.wikipedia.org/wiki/ViroPharma
|
Virodhamine ( O -arachidonoyl ethanolamine ; O-AEA ) is an endocannabinoid and a nonclassic eicosanoid , derived from arachidonic acid . O -Arachidonoyl ethanolamine is arachidonic acid and ethanolamine joined by an ester linkage, the opposite of the amide linkage found in anandamide . Based on this opposite orientation, the molecule was named virodhamine from the Sanskrit word virodha , which means opposition. It acts as an antagonist of the CB 1 receptor and agonist of the CB 2 receptor . Concentrations of virodhamine in the human hippocampus are similar to those of anandamide , but they are 2- to 9-fold higher in peripheral tissues that express CB 2 . Virodhamine lowers body temperature in mice, demonstrating cannabinoid activity in vivo . [ 1 ]
|
https://en.wikipedia.org/wiki/Virodhamine
|
Viroinformatics is an amalgamation of virology with bioinformatics , involving the application of information and communication technology in various aspects of viral research.
Currently there are more than 100 web servers and databases harboring knowledge regarding different viruses as well as distinct applications concerning diversity analysis, viral recombination, RNAi studies , drug design , protein–protein interaction , structural analysis etc. [ 1 ]
|
https://en.wikipedia.org/wiki/Viroinformatics
|
Virokines are proteins encoded by some large DNA viruses that are secreted by the host cell and serve to evade the host's immune system . Such proteins are referred to as virokines if they resemble cytokines , growth factors , or complement regulators ; the term viroceptor is sometimes used if the proteins resemble cellular receptors. [ 1 ] A third class of virally encoded immunomodulatory proteins consists of proteins that bind directly to cytokines. [ 2 ] Due to the immunomodulatory properties of these proteins, they have been proposed as potentially therapeutically relevant to autoimmune diseases. [ 3 ]
The primary mechanism of virokine interference with immune signaling is thought to be competitive inhibition of the binding of host signaling molecules to their target receptors. Virokines occupy binding sites on host receptors, thereby inhibiting access by signaling molecules. Viroceptors mimic host receptors and thus divert signaling molecules from finding their targets. Cytokine-binding proteins bind to and sequester cytokines, occluding the binding surface through which they interact with receptors. The effect is to attenuate and subvert host immune response. [ 1 ] [ 2 ]
The term "virokine" was coined by National Institutes of Health virologist Bernard Moss . [ 4 ] [ 5 ] The early 1990s saw several reports of virally encoded proteins with sequence homology to immune proteins, followed by reports of the cowpox and vaccinia viruses directly interfering with key immune regulator IL1B . The first identified virokine was an epidermal growth factor -like protein found in myxoma viruses . [ 6 ]
Much of the early work on virokines involved vaccinia virus, which was discovered to secrete proteins that promote proliferation of neighboring cells and block complement immune activity leading to inflammation . [ 5 ]
The immunomodulatory proteins, including virokines, in the poxvirus family have been extensively studied in the context of the evolution of the family. Virokines in this family are thought to have been acquired from host genes and from other viruses through horizontal gene transfer . [ 7 ] Similar observations have been made in the herpesvirus family; for example, Epstein-Barr virus encodes an interleukin protein with high sequence identity to the human interleukin-10 , suggesting a recent evolutionary origin. [ 3 ] [ 8 ]
|
https://en.wikipedia.org/wiki/Virokine
|
Virology is the scientific study of biological viruses . It is a subfield of microbiology that focuses on their detection, structure, classification and evolution, their methods of infection and exploitation of host cells for reproduction, their interaction with host organism physiology and immunity, the diseases they cause, the techniques to isolate and culture them, and their use in research and therapy.
The identification of the causative agent of tobacco mosaic disease, tobacco mosaic virus (TMV), as a novel pathogen by Martinus Beijerinck (1898) is now acknowledged as being the official beginning of the field of virology as a discipline distinct from bacteriology . He realized the source was neither a bacterial nor a fungal infection , but something completely different. Beijerinck used the word "virus" to describe the mysterious agent in his ' contagium vivum fluidum ' ('contagious living fluid'). Rosalind Franklin proposed the full structure of the tobacco mosaic virus in 1955.
One main motivation for the study of viruses is because they cause many infectious diseases of plants and animals. [ 1 ] The study of the manner in which viruses cause disease is viral pathogenesis . The degree to which a virus causes disease is its virulence . [ 2 ] These fields of study are called plant virology , animal virology and human or medical virology . [ 3 ]
Virology began when there were no methods for propagating or visualizing viruses or specific laboratory tests for viral infections. The methods for separating viral nucleic acids ( RNA and DNA ) and proteins , which are now the mainstay of virology, did not exist. Now there are many methods for observing the structure and functions of viruses and their component parts. Thousands of different viruses are now known, and virologists often specialize in either the viruses that infect plants, or bacteria and other microorganisms , or animals. Viruses that infect humans are now studied by medical virologists. Virology is a broad subject covering biology, health, animal welfare, agriculture and ecology.
Louis Pasteur was unable to find a causative agent for rabies and speculated about a pathogen too small to be detected by microscopes. [ 4 ] In 1884, the French microbiologist Charles Chamberland invented the Chamberland filter (or Pasteur-Chamberland filter) with pores small enough to remove all bacteria from a solution passed through it. [ 5 ] In 1892, the Russian biologist Dmitri Ivanovsky used this filter to study what is now known as the tobacco mosaic virus : crushed leaf extracts from infected tobacco plants remained infectious even after filtration to remove bacteria. Ivanovsky suggested the infection might be caused by a toxin produced by bacteria, but he did not pursue the idea. [ 6 ] At the time it was thought that all infectious agents could be retained by filters and grown on a nutrient medium—this was part of the germ theory of disease . [ 7 ]
In 1898, the Dutch microbiologist Martinus Beijerinck repeated the experiments and became convinced that the filtered solution contained a new form of infectious agent. [ 8 ] He observed that the agent multiplied only in cells that were dividing, but as his experiments did not show that it was made of particles, he called it a contagium vivum fluidum (soluble living germ) and reintroduced the word virus . Beijerinck maintained that viruses were liquid in nature, a theory later discredited by Wendell Stanley , who proved they were particulate. [ 6 ] In the same year, Friedrich Loeffler and Paul Frosch passed the first animal virus, aphthovirus (the agent of foot-and-mouth disease ), through a similar filter. [ 9 ]
In the early 20th century, the English bacteriologist Frederick Twort discovered a group of viruses that infect bacteria, now called bacteriophages [ 10 ] (or commonly 'phages'), and the French-Canadian microbiologist Félix d'Herelle described viruses that, when added to bacteria on an agar plate , would produce areas of dead bacteria. He accurately diluted a suspension of these viruses and discovered that the highest dilutions (lowest virus concentrations), rather than killing all the bacteria, formed discrete areas of dead organisms. Counting these areas and multiplying by the dilution factor allowed him to calculate the number of viruses in the original suspension. [ 11 ] Phages were heralded as a potential treatment for diseases such as typhoid and cholera , but their promise was forgotten with the development of penicillin . The development of bacterial resistance to antibiotics has renewed interest in the therapeutic use of bacteriophages. [ 12 ]
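As a worked example of this counting method (the numbers are hypothetical, chosen only to illustrate the arithmetic), the titre of the original suspension is the plaque count divided by the product of the dilution factor and the plated volume:

plaques = 42        # discrete clear areas counted on one plate (hypothetical)
dilution = 1e-6     # the suspension was diluted a million-fold (hypothetical)
volume_ml = 0.1     # volume of diluted suspension plated, in mL (hypothetical)

titre_per_ml = plaques / (dilution * volume_ml)
print(titre_per_ml)   # 4.2e8 plaque-forming units per mL of the original suspension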
By the end of the 19th century, viruses were defined in terms of their infectivity , their ability to pass filters, and their requirement for living hosts. Viruses had been grown only in plants and animals. In 1906 Ross Granville Harrison invented a method for growing tissue in lymph , and in 1913 E. Steinhardt, C. Israeli, and R.A. Lambert used this method to grow vaccinia virus in fragments of guinea pig corneal tissue. [ 13 ] In 1928, H. B. Maitland and M. C. Maitland grew vaccinia virus in suspensions of minced hens' kidneys. Their method was not widely adopted until the 1950s when poliovirus was grown on a large scale for vaccine production. [ 14 ]
Another breakthrough came in 1931 when the American pathologist Ernest William Goodpasture and Alice Miles Woodruff grew influenza and several other viruses in fertilised chicken eggs. [ 15 ] In 1949, John Franklin Enders , Thomas Weller , and Frederick Robbins grew poliovirus in cultured cells from aborted human embryonic tissue, [ 16 ] the first virus to be grown without using solid animal tissue or eggs. This work enabled Hilary Koprowski , and then Jonas Salk , to make an effective polio vaccine . [ 17 ]
The first images of viruses were obtained upon the invention of electron microscopy in 1931 by the German engineers Ernst Ruska and Max Knoll . [ 18 ] In 1935, American biochemist and virologist Wendell Meredith Stanley examined the tobacco mosaic virus and found it was mostly made of protein. [ 19 ] A short time later, this virus was separated into protein and RNA parts. [ 20 ] The tobacco mosaic virus was the first to be crystallised and its structure could, therefore, be elucidated in detail. The first X-ray diffraction pictures of the crystallised virus were obtained by Bernal and Fankuchen in 1941. Based on her X-ray crystallographic pictures, Rosalind Franklin discovered the full structure of the virus in 1955. [ 21 ] In the same year, Heinz Fraenkel-Conrat and Robley Williams showed that purified tobacco mosaic virus RNA and its protein coat can assemble by themselves to form functional viruses, suggesting that this simple mechanism was probably the means through which viruses were created within their host cells. [ 22 ]
The second half of the 20th century was the golden age of virus discovery, and most of the documented species of animal, plant, and bacterial viruses were discovered during these years. [ 23 ] In 1957 equine arterivirus and the cause of bovine virus diarrhoea (a pestivirus ) were discovered. In 1963 the hepatitis B virus was discovered by Baruch Blumberg , [ 24 ] and in 1965 Howard Temin described the first retrovirus . Reverse transcriptase , the enzyme that retroviruses use to make DNA copies of their RNA, was first described in 1970 by Temin and David Baltimore independently. [ 25 ] In 1983 Luc Montagnier 's team at the Pasteur Institute in France, first isolated the retrovirus now called HIV. [ 26 ] In 1989 Michael Houghton 's team at Chiron Corporation discovered hepatitis C . [ 27 ] [ 28 ]
There are several approaches to detecting viruses, including the detection of virus particles (virions), their antigens or their nucleic acids, as well as infectivity assays.
Viruses were seen for the first time in the 1930s when electron microscopes were invented. These microscopes use beams of electrons, which have a much shorter wavelength than light and can detect objects that cannot be seen using light microscopes. The highest magnification obtainable by electron microscopes is up to 10,000,000 times, [ 29 ] whereas for light microscopes it is around 1,500 times. [ 30 ]
Virologists often use negative staining to help visualise viruses. In this procedure, the viruses are suspended in a solution of metal salts such as uranyl acetate. The atoms of metal are opaque to electrons and the viruses are seen as suspended in a dark background of metal atoms. [ 29 ] This technique has been in use since the 1950s. [ 31 ] Many viruses were discovered using this technique and negative staining electron microscopy is still a valuable weapon in a virologist's arsenal. [ 32 ]
Traditional electron microscopy has disadvantages in that viruses are damaged by drying in the high vacuum inside the electron microscope and the electron beam itself is destructive. [ 29 ] In cryogenic electron microscopy the structure of viruses is preserved by embedding them in an environment of vitreous water . [ 33 ] This allows the determination of biomolecular structures at near-atomic resolution, [ 34 ] and has attracted wide attention to the approach as an alternative to X-ray crystallography or NMR spectroscopy for the determination of the structure of viruses. [ 35 ]
Viruses are obligate intracellular parasites and, because they only reproduce inside the living cells of a host, these cells are needed to grow them in the laboratory. For viruses that infect animals (usually called "animal viruses"), cells grown in laboratory cell cultures are used. In the past, fertile hens' eggs were used and the viruses were grown on the membranes surrounding the embryo. This method is still used in the manufacture of some vaccines. For viruses that infect bacteria, the bacteriophages , the bacteria growing in test tubes can be used directly. For plant viruses, the natural host plants can be used or, particularly when the infection is not obvious, so-called indicator plants, which show signs of infection more clearly. [ 36 ] [ 37 ]
Viruses that have grown in cell cultures can be detected indirectly by the detrimental effect they have on the host cell. These cytopathic effects are often characteristic of the type of virus. For instance, herpes simplex viruses produce a characteristic "ballooning" of the cells, typically human fibroblasts . Some viruses, such as mumps virus , cause red blood cells from chickens to attach firmly to the infected cells. This is called "haemadsorption" or "hemadsorption". Some viruses produce localised "lesions" in cell layers called plaques , which are useful in quantitation assays and in identifying the species of virus by plaque reduction assays . [ 38 ] [ 39 ]
Viruses growing in cell cultures are used to measure their susceptibility to validated and novel antiviral drugs . [ 40 ]
Viruses are antigens that induce the production of antibodies and these antibodies can be used in laboratories to study viruses. Related viruses often react with each other's antibodies and some viruses can be named based on the antibodies they react with. The use of antibodies, which were once exclusively derived from the serum (blood fluid) of animals, is called serology . [ 41 ] Once an antigen–antibody reaction has taken place in a test, other methods are needed to confirm it. Older methods included complement fixation tests , [ 42 ] hemagglutination inhibition and virus neutralisation . [ 43 ] Newer methods use enzyme immunoassays (EIA). [ 44 ]
In the years before PCR was invented, immunofluorescence was used to quickly confirm viral infections. It is an infectivity assay that is virus species specific because antibodies are used. The antibodies are tagged with a fluorescent dye and, when an optical microscope with a modified light source is used, infected cells glow in the dark. [ 45 ]
PCR is a mainstay method for detecting viruses in all species, including plants and animals. It works by detecting traces of virus-specific RNA or DNA. It is very sensitive and specific, but can be easily compromised by contamination. Most of the tests used in veterinary virology and medical virology are based on PCR or similar methods such as transcription mediated amplification . When a novel virus emerges, such as the COVID-19 coronavirus, a specific test can be devised quickly so long as the viral genome has been sequenced and unique regions of the viral DNA or RNA identified. [ 46 ] The invention of microfluidic tests has allowed most of these tests to be automated. [ 47 ] Despite its specificity and sensitivity, PCR has a disadvantage in that it does not differentiate infectious and non-infectious viruses and "tests of cure" have to be delayed for up to 21 days to allow residual viral nucleic acid to clear from the site of the infection. [ 48 ]
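As a toy illustration of what "identifying unique regions" can mean computationally, the sketch below finds k-mers that are present in a target sequence but absent from a set of related sequences. The sequences and the function name are made up for illustration; real assay design works on full genomes and considers primer and probe chemistry, using dedicated software.

```python
def unique_kmers(target, others, k=20):
    """Return k-mers found in the target sequence but in none of the others.

    target -- candidate viral genome sequence (string of A/C/G/T)
    others -- iterable of related genome sequences to exclude
    k      -- k-mer length (real primers/probes are typically ~18-25 nt)
    """
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    background = set()
    for seq in others:
        background |= kmers(seq)
    return kmers(target) - background

# Tiny made-up sequences, purely for illustration
novel = "ATGGCGTACGTTAGCCGATACGGATTACGCTAGGCTAACGT"
related = ["ATGGCGTACGTTAGCCGATAAGGATTACGCTAGGCTAACGT"]
print(sorted(unique_kmers(novel, related, k=12))[:3])
```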
In laboratories, many of the diagnostic tests for detecting viruses are nucleic acid amplification methods such as PCR. Some tests detect the viruses themselves or their components, using methods that include electron microscopy and enzyme immunoassays . The so-called "home" or "self"-testing devices are usually lateral flow tests , which detect the virus using a tagged monoclonal antibody . [ 49 ] These are also used in agriculture, food and environmental sciences. [ 50 ]
Counting viruses (quantitation) has always had an important role in virology and has become central to the control of some infections of humans where the viral load is measured. [ 51 ] There are two basic methods: those that count the fully infective virus particles, which are called infectivity assays, and those that count all the particles including the defective ones. [ 29 ]
Infectivity assays measure the amount (concentration) of infective viruses in a sample of known volume. [ 52 ] For host cells, plants or cultures of bacterial or animal cells are used. Laboratory animals such as mice have also been used, particularly in veterinary virology. [ 53 ] These assays are either quantitative, where the results are on a continuous scale, or quantal, where an event either occurs or it does not. Quantitative assays give absolute values and quantal assays give a statistical probability, such as the volume of the test sample needed to ensure that 50% of the host cells, plants or animals are infected. This is called the median infectious dose or ID 50 . [ 54 ] Infective bacteriophages can be counted by seeding them onto "lawns" of bacteria in culture dishes. At low concentrations, the viruses form holes in the lawn that can be counted. The number of viruses is then expressed as plaque forming units . For bacteriophages that reproduce in bacteria that cannot be grown in cultures, viral load assays are used. [ 55 ]
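One classical way of estimating the ID 50 from quantal data is the Reed–Muench method, sketched below in plain Python. The article does not prescribe any particular calculation, so this is offered only as an illustration, and the example counts are invented.

```python
import math

def reed_muench_id50(dilutions, infected, total):
    """Estimate the 50% infectious dose (ID50) from quantal assay data
    using the classical Reed-Muench method.

    dilutions -- dilution factors, most concentrated first, e.g. [1e-1, 1e-2, 1e-3, 1e-4]
    infected  -- number of hosts infected at each dilution
    total     -- number of hosts inoculated at each dilution

    Returns the dilution expected to infect 50% of hosts.
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    # A host infected at a high dilution would also be infected at lower ones,
    # so infected counts accumulate from the most dilute end upwards ...
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    # ... and uninfected counts accumulate from the most concentrated end downwards.
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    pct = [100 * a / (a + b) for a, b in zip(cum_inf, cum_uninf)]

    # Find the pair of dilutions bracketing 50% and interpolate on a log scale.
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])  # proportionate distance
            log_id50 = math.log10(dilutions[i]) + pd * (
                math.log10(dilutions[i + 1]) - math.log10(dilutions[i]))
            return 10 ** log_id50
    raise ValueError("50% endpoint not bracketed by the data")

# Invented example: ten hosts inoculated per ten-fold dilution
print(reed_muench_id50([1e-1, 1e-2, 1e-3, 1e-4],
                       [10, 8, 3, 0],
                       [10, 10, 10, 10]))  # ~10^-2.58
```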
The focus forming assay (FFA) is a variation of the plaque assay, but instead of relying on cell lysis to detect plaque formation, the FFA employs immunostaining techniques using fluorescently labeled antibodies specific for a viral antigen to detect infected host cells and infectious virus particles before an actual plaque is formed. The FFA is particularly useful for quantifying classes of viruses that do not lyse the cell membranes, as these viruses would not be amenable to the plaque assay. As in the plaque assay, host cell monolayers are infected with various dilutions of the virus sample and allowed to incubate for a relatively brief period (e.g., 24–72 hours) under a semisolid overlay medium that restricts the spread of infectious virus, creating localized clusters (foci) of infected cells. Plates are subsequently probed with fluorescently labeled antibodies against a viral antigen, and fluorescence microscopy is used to count and quantify the number of foci. The FFA method typically yields results in less time than plaque or fifty-percent-tissue-culture-infective-dose (TCID 50 ) assays, but it can be more expensive in terms of required reagents and equipment. Assay completion time also depends on the size of the area that the user is counting. A larger area will require more time but can provide a more accurate representation of the sample. Results of the FFA are expressed as focus forming units per milliliter, or FFU/mL. [ 56 ]
When an assay for measuring the infective virus particle is done (plaque assay, focus assay), viral titre often refers to the concentration of infectious viral particles, which is different from the total number of viral particles. Viral load assays usually count the number of viral genomes present rather than the number of particles and use methods similar to PCR . [ 57 ] Viral load tests are important in the control of infections by HIV. [ 58 ] This versatile method can also be used for plant viruses. [ 59 ] [ 60 ]
Molecular virology is the study of viruses at the level of nucleic acids and proteins. The methods invented by molecular biologists have all proven useful in virology. Their small sizes and relatively simple structures make viruses ideal candidates for study by these techniques.
For further study, viruses grown in the laboratory need purifying to remove contaminants from the host cells. The methods used often have the advantage of concentrating the viruses, which makes it easier to investigate them.
Centrifuges are often used to purify viruses. Low-speed centrifuges, i.e. those with a top speed of 10,000 revolutions per minute (rpm), are not powerful enough to concentrate viruses, but ultracentrifuges, with a top speed of around 100,000 rpm, are, and this difference is used in a method called differential centrifugation . In this method the larger and heavier contaminants are removed from a virus mixture by low-speed centrifugation. The viruses, which are small and light and are left in suspension, are then concentrated by high-speed centrifugation. [ 62 ]
Following differential centrifugation, virus suspensions often remain contaminated with debris that has the same sedimentation coefficient and is not removed by the procedure. In these cases a modification of centrifugation, called buoyant density centrifugation , is used. In this method the viruses recovered from differential centrifugation are centrifuged again at very high speed for several hours in dense solutions of sugars or salts that form a density gradient, from low to high, in the tube during the centrifugation. In some cases, preformed gradients are used, where solutions of steadily decreasing density are carefully overlaid on each other. Like an object in the Dead Sea , despite the centrifugal force the virus particles cannot sink into solutions that are more dense than they are, and they form discrete layers of, often visible, concentrated viruses in the tube. Caesium chloride is often used for these solutions as it is relatively inert but readily self-forms a gradient when centrifuged at high speed in an ultracentrifuge. [ 61 ] Buoyant density centrifugation can also be used to purify the components of viruses such as their nucleic acids or proteins. [ 63 ]
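The banding behaviour can be illustrated with a toy calculation: in an idealised linear gradient, a particle comes to rest at the depth where the local solution density equals its own buoyant density. The numbers below are hypothetical and the linear-gradient assumption is a simplification.

```python
def band_position(rho_particle, rho_top, rho_bottom, tube_length_cm):
    """Depth (from the top of the tube) at which a particle bands in an
    idealised linear density gradient, i.e. where the local solution
    density equals the particle's buoyant density."""
    if not (rho_top <= rho_particle <= rho_bottom):
        raise ValueError("particle density lies outside the gradient")
    fraction = (rho_particle - rho_top) / (rho_bottom - rho_top)
    return fraction * tube_length_cm

# Hypothetical numbers: a virus of buoyant density 1.34 g/mL in a
# caesium chloride gradient running from 1.20 to 1.50 g/mL over a 10 cm tube
print(band_position(1.34, 1.20, 1.50, 10))  # ~4.67 cm below the top
```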
The separation of molecules based on their electric charge is called electrophoresis . Viruses and all their components can be separated and purified using this method. This is usually done in a supporting medium such as agarose and polyacrylamide gels . The separated molecules are revealed using stains such as Coomassie blue , for proteins, or ethidium bromide for nucleic acids. In some instances the viral components are rendered radioactive before electrophoresis and are revealed using photographic film in a process known as autoradiography . [ 64 ]
As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus. Traditional Sanger sequencing and next-generation sequencing (NGS) are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. [ 65 ] NGS has surpassed traditional Sanger as the most popular approach for generating viral genomes. [ 65 ] Viral genome sequencing has become a central method in viral epidemiology and viral classification .
Data from the sequencing of viral genomes can be used to determine evolutionary relationships and this is called phylogenetic analysis . [ 66 ] Software, such as PHYLIP , is used to draw phylogenetic trees . This analysis is also used in studying the spread of viral infections in communities ( epidemiology ). [ 67 ]
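As a minimal illustration of the kind of pairwise comparison that underlies distance-based phylogenetics, the sketch below computes p-distances (the proportion of differing sites) between short, made-up aligned sequences. Real analyses use dedicated packages such as PHYLIP, mentioned above, together with far more sophisticated evolutionary models.

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences of equal length."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

def distance_matrix(seqs):
    """Pairwise p-distance matrix for a dict of {name: aligned sequence}."""
    names = list(seqs)
    return {(x, y): p_distance(seqs[x], seqs[y]) for x in names for y in names}

# Made-up aligned fragments, purely for illustration
aligned = {
    "isolate_A": "ATGGCGTACGTT",
    "isolate_B": "ATGGCGTACGTA",
    "isolate_C": "ATGACGTACCTA",
}
m = distance_matrix(aligned)
print(m[("isolate_A", "isolate_C")])  # 0.25
```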
When purified viruses or viral components are needed for diagnostic tests or vaccines, cloning can be used instead of growing the viruses. [ 68 ] At the start of the COVID-19 pandemic the availability of the severe acute respiratory syndrome coronavirus 2 RNA sequence enabled tests to be manufactured quickly. [ 69 ] There are several proven methods for cloning viruses and their components. Small pieces of DNA called cloning vectors are often used and the most common ones are laboratory-modified plasmids (small circular molecules of DNA produced by bacteria). The viral nucleic acid, or a part of it, is inserted in the plasmid, which is then copied many times over by bacteria. This recombinant DNA can then be used to produce viral components without the need for native viruses. [ 70 ]
The viruses that reproduce in bacteria, archaea and fungi are informally called "phages", [ 71 ] and the ones that infect bacteria – bacteriophages – in particular are useful in virology and biology in general. [ 72 ] Bacteriophages were some of the first viruses to be discovered, early in the twentieth century, [ 73 ] and because they are relatively easy to grow quickly in laboratories, much of our understanding of viruses originated by studying them. [ 73 ] Bacteriophages, long known for their positive effects in the environment, are used in phage display techniques for screening proteins and DNA sequences. They are a powerful tool in molecular biology. [ 74 ]
All viruses have genes, which are studied using genetics . [ 75 ] All the techniques used in molecular biology, such as cloning, creating mutations, and RNA silencing , are used in viral genetics. [ 76 ]
Reassortment is the switching of genes from different parents and it is particularly useful when studying the genetics of viruses that have segmented genomes (fragmented into two or more nucleic acid molecules) such as influenza viruses and rotaviruses . The genes that encode properties such as serotype can be identified in this way. [ 77 ]
Often confused with reassortment, recombination is also the mixing of genes but the mechanism differs in that stretches of DNA or RNA molecules, as opposed to the full molecules, are joined during the RNA or DNA replication cycle. Recombination is not as common as reassortment in nature but it is a powerful tool in laboratories for studying the structure and functions of viral genes. [ 78 ]
Reverse genetics is a powerful research method in virology. [ 79 ] In this procedure complementary DNA (cDNA) copies of virus genomes called "infectious clones" are used to produce genetically modified viruses that can then be tested for changes in, say, virulence or transmissibility. [ 80 ]
A major branch of virology is virus classification . It is artificial in that it is not based on evolutionary phylogenetics but on shared or distinguishing properties of viruses. [ 81 ] [ 82 ] It seeks to describe the diversity of viruses by naming and grouping them on the basis of similarities. [ 83 ] In 1962, André Lwoff , Robert Horne , and Paul Tournier were the first to develop a means of virus classification, based on the Linnaean hierarchical system. [ 84 ] This system based classification on phylum , class , order , family , genus , and species . Viruses were grouped according to their shared properties (not those of their hosts) and the type of nucleic acid forming their genomes. [ 85 ] In 1966, the International Committee on Taxonomy of Viruses (ICTV) was formed. The system proposed by Lwoff, Horne and Tournier was initially not accepted by the ICTV because the small genome size of viruses and their high rate of mutation made it difficult to determine their ancestry beyond order. As such, the Baltimore classification system has come to be used to supplement the more traditional hierarchy. [ 86 ] Starting in 2018, the ICTV began to acknowledge deeper evolutionary relationships between viruses that have been discovered over time and adopted a 15-rank classification system ranging from realm to species. [ 87 ] Additionally, some species within the same genus are grouped into a genogroup . [ 88 ] [ 89 ]
The ICTV developed the current classification system and wrote guidelines that put a greater weight on certain virus properties to maintain family uniformity. A unified taxonomy (a universal system for classifying viruses) has been established. Only a small part of the total diversity of viruses has been studied. [ 90 ] As of 2021, 6 realms, 10 kingdoms, 17 phyla, 2 subphyla, 39 classes, 65 orders, 8 suborders, 233 families, 168 subfamilies , 2,606 genera, 84 subgenera , and 10,434 species of viruses have been defined by the ICTV. [ 91 ]
The general taxonomic structure of taxon ranks and the suffixes used in taxonomic names are shown hereafter. As of 2021, the ranks of subrealm, subkingdom, and subclass are unused, whereas all other ranks are in use. [ 91 ]
The Nobel Prize-winning biologist David Baltimore devised the Baltimore classification system. [ 92 ]
The Baltimore classification of viruses is based on the mechanism of mRNA production. Viruses must generate mRNAs from their genomes to produce proteins and replicate themselves, but different mechanisms are used to achieve this in each virus family. Viral genomes may be single-stranded (ss) or double-stranded (ds), RNA or DNA, and may or may not use reverse transcriptase (RT). In addition, ssRNA viruses may be either sense (+) or antisense (−). This classification places viruses into seven groups:
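The grouping can be written out as a small lookup table. The sketch below simply restates the seven Baltimore groups; the example viruses named in it are illustrative choices and not part of the classification scheme itself.

```python
# The seven Baltimore groups, keyed by group number (Roman numeral), with the
# genome type that defines each group and a familiar example virus.
BALTIMORE_GROUPS = {
    "I":   ("double-stranded DNA (dsDNA)",                 "herpesviruses"),
    "II":  ("single-stranded DNA (ssDNA)",                 "parvoviruses"),
    "III": ("double-stranded RNA (dsRNA)",                 "rotaviruses"),
    "IV":  ("positive-sense single-stranded RNA (+ssRNA)", "coronaviruses"),
    "V":   ("negative-sense single-stranded RNA (-ssRNA)", "influenza viruses"),
    "VI":  ("ssRNA with reverse transcriptase (ssRNA-RT)", "retroviruses such as HIV"),
    "VII": ("dsDNA with reverse transcriptase (dsDNA-RT)", "hepatitis B virus"),
}

for group, (genome, example) in BALTIMORE_GROUPS.items():
    print(f"Group {group}: {genome} - e.g. {example}")
```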
|
https://en.wikipedia.org/wiki/Virology
|
Virophysics is a branch of biophysics in which the theoretical concepts and experimental techniques of physics are applied to study the mechanics and dynamics driving the interactions between virions and cells. [ 1 ] [ 2 ] [ 3 ]
Research in virophysics typically focuses on resolving the physical structure and structural properties of viruses, the dynamics of their assembly and disassembly, their population kinetics over the course of an infection, and the emergence and evolution of various strains. [ 1 ] [ 2 ] [ 3 ] The common aim of these efforts is to establish a set of models (expressions or laws) that quantitatively describe the details of all processes involved in viral infections with reliable predictive power. Having such a quantitative understanding of viruses would not only rationalize the development of strategies to prevent, guide, or control the course of viral infections, but could also be used to exploit virus processes and put viruses to work in areas such as nanosciences, materials, and biotechnologies.
Traditionally, in vivo and in vitro experimentation has been the only way to study viral infections. This approach for deriving knowledge based solely on experimental observations relies on common-sense assumptions (e.g., a higher virus count means a fitter virus). These assumptions often go untested due to difficulties controlling individual components of these complex systems without affecting others. The use of mathematical models and computer simulations to describe such systems, however, makes it possible to deconstruct an experimental system into individual components and determine how the pieces combine to create the infection we observe.
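A common starting point for such within-host models is the so-called target-cell-limited model, a small system of ordinary differential equations for susceptible target cells, infected cells and free virus. The sketch below integrates it with a plain forward-Euler scheme; the parameter values are invented for illustration and are not tied to any particular virus or published fit.

```python
def simulate(beta=1e-7, delta=2.0, p=100.0, c=5.0,
             T0=1e6, I0=0.0, V0=10.0, days=10.0, dt=0.001):
    """Target-cell-limited model of a within-host viral infection:
        dT/dt = -beta*T*V             (susceptible target cells)
        dI/dt =  beta*T*V - delta*I   (infected cells)
        dV/dt =  p*I - c*V            (free virus)
    Integrated with a simple forward-Euler step; all parameter values
    are purely illustrative.
    """
    T, I, V = T0, I0, V0
    t, out = 0.0, [(0.0, V0)]
    while t < days:
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
        t += dt
        out.append((t, V))
    return out

trajectory = simulate()
print(f"peak viral load ~ {max(v for _, v in trajectory):.3g}")
```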
Virophysics has large overlaps with other fields. For example, the modelling of infectious disease dynamics is a popular research topic in mathematics, notably in applied mathematics or mathematical biology . While most modelling efforts in mathematics have focused on elucidating the dynamics of spread of infectious diseases at an epidemiological scale (person-to-person), there is also important work being done at the cellular scale (cell-to-cell). Virophysics focuses almost exclusively on the single-cell or multi-cellular scale, utilizing physical models to resolve the temporal and spatial dynamics of viral infection spread within a cell culture (in vitro), an organ (ex vivo or in vivo) or an entire host (in vivo).
|
https://en.wikipedia.org/wiki/Virophysics
|
A virosome is a drug or vaccine delivery mechanism consisting of a unilamellar phospholipid membrane (either a mono- or bi-layer) vesicle incorporating virus-derived proteins that allow the virosome to fuse with target cells. Viruses are infectious agents that can replicate in their host organism; virosomes, however, do not replicate. The properties that virosomes share with viruses are based on their structure: virosomes are essentially safely modified viral envelopes that contain the phospholipid membrane and surface glycoproteins. As a drug or vaccine delivery mechanism they are biologically compatible with many host organisms and are also biodegradable. The use of reconstituted virally derived proteins in the formation of the virosome allows for the utilization of what would otherwise be the immunogenic properties of a live-attenuated virus, but from a safely killed virus. [ 1 ] A safely killed virus can serve as a promising vector because it will not cause infection and the viral structure allows the virosome to recognize specific components of its target cells. [ citation needed ]
Virosomes are vehicles with a spherical shape and a phospholipid mono/bilayer membrane. Inside the virosome there is a central cavity that holds therapeutic molecules such as nucleic acids, proteins, and drugs. [ 2 ] On the surface of the virosome there can be different types of glycoproteins . Glycoproteins are proteins that have an oligosaccharide chain bonded to amino acid chains. The different types of glycoproteins on the surface of the virosome increase the specificity for the target cells because the surface glycoproteins help with recognition as well as the attachment of the virosomes to their target cells. In the case of the influenza virosome, the glycoproteins are antigen , haemagglutinin , and neuraminidase . Antigens are molecules that trigger an immune response when targeted by a specific antibody that corresponds to the shape of the antigen. [ 3 ] Haemagglutinin is a viral glycoprotein that causes red blood cell agglutination. [ 4 ] Neuraminidases are enzymes that break glycosidic linkages. [ 5 ] The size and surface molecules presented on the virosome can be modified so that it can target different types of cells. [ 2 ]
Virosomes deliver antigens and therapeutic agents to their targeted cells. Virosomes can act as immunopotentiating agents and as agents of targeted drug delivery. Virosomes as immunopotentiating agents activate cell mediated and humoral immune responses . Virosomes are suspended in saline buffers and are administered through respiratory, parenteral, intravenous, oral, intramuscular, and topical routes. [ 2 ]
In contrast to liposomes , virosomes contain functional viral envelope glycoproteins : influenza virus hemagglutinin (HA) and neuraminidase (NA) intercalated in the phospholipid bilayer membrane. They have a typical mean diameter of 150 nm. Essentially, virosomes represent reconstituted empty influenza virus envelopes, devoid of the nucleocapsid including the genetic material of the source virus . [ 6 ]
They are also being considered for HIV-1 vaccine research. [ 7 ]
They were used as a drug carrier mechanism for experimental cancer therapies. [ 8 ]
The benefits of virosomes are that the specific structure and small size help with the precision of target cells. The phospholipid membrane protects the virosome from adverse reactions in the body and the membrane allows the virosome to be biocompatible and biodegradable in the body. [ 2 ] The challenges of virosomes are the rapid detection and activation of the immune response against the viral glycoproteins, which can result in a decrease of the virosomes. However, glycoproteins can still induce a prophylactic response against the virus, which helps with establishing virosomes as vaccine delivery systems. [ 2 ] If the virosome is administered into the bloodstream, the virosome can disintegrate. However, if the virosome can reach the target quickly enough, the drug delivery will still happen. There are some challenges with virosomes, but there are ways in which the virosome can still help activate the immune response. [ citation needed ]
|
https://en.wikipedia.org/wiki/Virosome
|
The term virosphere ( virus diversity, virus world, global virosphere) was coined to refer to all those places in which viruses are found or which are affected by viruses. [ 1 ] [ 2 ] More recently, however, virosphere has also been used to refer to the pool of viruses that occurs in all hosts and all environments, [ 3 ] as well as viruses associated with specific types of hosts (the prokaryotic virosphere, [ 4 ] archaeal virosphere, [ 5 ] invertebrate virosphere), [ 6 ] type of genome ( RNA virosphere, [ 7 ] dsDNA virosphere) [ 8 ] or ecological niche (the marine virosphere). [ 9 ]
The scope of viral genome diversity is enormous compared to cellular life. All known cellular organisms have double-stranded DNA genomes, whereas viruses have one of at least seven different types of genetic information , namely dsDNA, ssDNA, dsRNA, ssRNA(+), ssRNA(−), ssRNA-RT and dsDNA-RT. Each type of genetic information has its specific manner of mRNA synthesis. The Baltimore classification is a system providing an overview of these mechanisms for each type of genome. Moreover, in contrast to cellular organisms, viruses do not have universally conserved sequences in their genomes that can be used for comparison. [ citation needed ]
Viral genome size varies approximately 1000-fold. The smallest viruses may have genomes of only 1–2 kb coding for one or two genes, and this is enough for them to evolve successfully and travel through space and time by infecting and replicating (making copies of themselves) in their hosts. The two most basic viral genes are the replicase gene and the capsid protein gene; as soon as a virus has both, it represents a biological entity able to evolve and reproduce in cellular life forms. Some viruses may have only a replicase gene and use the capsid gene of another virus, e.g. an endogenous virus . Most viral genomes are 10–100 kb, and bacteriophages tend to have larger genomes carrying parts of the genome translation machinery genes from their host. In contrast, RNA viruses have smaller genomes, with a maximum of about 35 kb in coronaviruses . RNA genomes have higher mutation rates, which is why they have to be small enough not to harbour too many mutations that would disrupt essential genes or parts of them. [ 10 ] The functions of the vast majority of viral genes remain unknown, and approaches to studying them have yet to be developed. [ 11 ] The total number of viral genes is much higher than the total number of genes in the three domains of cellular life combined, which in practice means that viruses encode most of the genetic diversity on the planet. [ 12 ]
Viruses are cosmopolitan: collectively they are able to infect every cell and every organism on planet Earth. However, different viruses infect different hosts. Viruses are host specific as they need to replicate (reproduce) within a host cell. In order to enter a cell, a viral particle needs to interact with a receptor on the surface of its host cell. For replication many viruses use their own replicases, but for protein synthesis they depend on the protein synthesis machinery of their host cell. Thus, host specificity is a limiting factor for viral reproduction. [ citation needed ]
Some viruses have an extremely narrow host range and are able to infect only one particular strain of one bacterial species, whereas others are able to infect hundreds or even thousands of different hosts. For example, cucumber mosaic virus (CMV) can use more than 1000 different plant species as hosts. [ 13 ] Members of viral families like Rhabdoviridae infect hosts from different kingdoms, e.g. plants and vertebrates, [ 14 ] and members of the genera Psimunavirus and Myohalovirus infect hosts from different domains of life, e.g. bacteria and archaea. [ 15 ]
The capsid is the outer protective shell or scaffold of a viral genome. The capsid enclosing the viral nucleic acid makes up the viral particle, or virion. The capsid is made of protein and sometimes has a lipid layer acquired from the host cell on exit. Capsid proteins are highly symmetrical and assemble on their own within a host cell, because an assembled capsid is a more thermodynamically favourable state than separate, randomly floating proteins. Most viral capsids have icosahedral or helical symmetry, whereas bacteriophages have a complex structure consisting of an icosahedral head and a helical tail, including a baseplate and fibers important for host cell recognition and penetration. [ 16 ] Viruses of archaea that infect hosts living in extreme environments, such as boiling water or highly saline or acidic environments, have very different capsid shapes and structures. The variety of capsid structures of archaeal viruses includes the lemon-shaped viruses of the family Bicaudaviridae and the genus Salterprovirus , the spindle-shaped Fuselloviridae , the bottle-shaped Ampullaviridae and the egg-shaped Guttaviridae . [ 5 ]
The capsid size of a virus differs dramatically depending on its genome size and capsid type. Icosahedral capsids are measured by diameter, whereas helical and complex capsids are measured by length and diameter. Viruses differ in capsid size across a spectrum from 10 to more than 1000 nm. The smallest viruses include ssDNA viruses such as parvoviruses , with icosahedral capsids approximately 14 nm in diameter. The biggest currently known viruses are Pithovirus , Mamavirus and Pandoravirus . Pithovirus is a flask-shaped virus 1500 nm long and 500 nm in diameter, Pandoravirus is an oval-shaped virus 1000 nm (1 micron) long and Mamavirus is an icosahedral virus reaching approximately 500 nm in diameter. [ 17 ] How capsid size depends on the size of the viral genome can be shown by comparing icosahedral viruses: the smallest viruses are 15–30 nm in diameter and have genomes in the range of 5 to 15 kb (kilobases or kilobase pairs depending on the type of genome), and the biggest are near 500 nm in diameter and their genomes are also the largest, exceeding 1 Mb (million base pairs). [ citation needed ]
Viral evolution presumably started at the beginning of the second age of the RNA world , when different types of viral genomes arose through the transition from RNA to RT to DNA, which also suggests that viruses played a critical role in the emergence of DNA and predate LUCA . [ 18 ] [ 19 ] The abundance and variety of viral genes also imply that their origin predates LUCA . [ 20 ] As viruses do not share unifying common genes, they are considered to be polyphyletic, or having multiple origins, as opposed to the single common origin that all cellular life forms have. [ 21 ] [ 22 ] Virus evolution is more complex as it is highly prone to horizontal gene transfer , genetic recombination and reassortment . Moreover, viral evolution should always be considered as a process of co-evolution with the host, as a host cell is indispensable for virus reproduction and hence for evolution . [ citation needed ]
Viruses are the most abundant biological entities: there are an estimated 10^31 virus particles on our planet. [ 23 ] [ 24 ] Viruses are capable of infecting all organisms on Earth and they are able to survive in much harsher environments than any cellular life form. As viruses cannot be included in the tree of life , there is no separate structure illustrating viral diversity and evolutionary relationships. [ 25 ] However, viral ubiquity can be imagined as a virosphere covering the whole tree of life. [ citation needed ]
We are now entering a phase of exponential virus discovery. Genome sequencing technologies, including high-throughput methods, allow fast and cheap sequencing of environmental samples. The vast majority of the sequences recovered from any environment, whether natural or human-made reservoirs, are new. [ 26 ] [ 27 ] In practice this means that in over 100 years of virus research, from the discovery of bacteriophages (the viruses of bacteria) in 1917 until the present, we have only scratched the surface of a great viral diversity. The classic methods used previously, such as viral culture , allowed physical virions or viral particles to be observed using the electron microscope and information to be gathered about their physical and molecular properties. The new methods deal only with the genetic information of viruses. [ citation needed ]
|
https://en.wikipedia.org/wiki/Virosphere
|
Virotherapy is a treatment using biotechnology to convert viruses into therapeutic agents by reprogramming viruses to treat diseases. There are three main branches of virotherapy: anti-cancer oncolytic viruses , viral vectors for gene therapy and viral immunotherapy . These branches use three different types of treatment methods: gene overexpression, gene knockout, and suicide gene delivery. Gene overexpression adds genetic sequences that compensate for low to zero levels of needed gene expression. Gene knockout uses RNA methods to silence or reduce expression of disease-causing genes. Suicide gene delivery introduces genetic sequences that induce an apoptotic response in cells, usually to kill cancerous growths. [ 1 ] In a slightly different context, virotherapy can also refer more broadly to the use of viruses to treat certain medical conditions by killing pathogens.
Chester M. Southam , a researcher at Memorial Sloan Kettering Cancer Center , pioneered the study of viruses as potential agents to treat cancer. [ 2 ]
Oncolytic virotherapy is not a new idea – as early as the mid 1950s doctors were noticing that cancer patients who suffered a non-related viral infection, or who had been vaccinated recently, showed signs of improvement; [ 3 ] this has been largely attributed to the production of interferon and tumour necrosis factors in response to viral infection, but oncolytic viruses are being designed that selectively target and lyse only cancerous cells. [ citation needed ]
In the 1940s and 1950s, studies were conducted in animal models to evaluate the use of viruses in the treatment of tumours . [ 4 ] In the 1940s–1950s some of the earliest human clinical trials with oncolytic viruses were started. [ 5 ] [ 6 ]
It is believed that oncolytic viruses achieve their goals by two mechanisms: selective killing of tumor cells and recruitment of the host immune system . [ 7 ] [ 8 ] One of the major challenges in cancer treatment is finding treatments that target tumor cells while ignoring non-cancerous host cells. Viruses are chosen because they can target specific receptors expressed by cancer cells that allow for virus entry. One example of this is the targeting of CD46 on multiple myeloma cells by measles virus. [ 9 ] The expression of these receptors is often increased in tumor cells. [ 8 ] Viruses can also be engineered to target specific receptors on tumor cells. [ 8 ] Once viruses have entered the tumor cell, the rapid growth and division of tumor cells, as well as their decreased ability to fight off viruses, make them advantageous for viral replication compared to non-tumorous cells. [ 7 ] [ 8 ] The replication of viruses in tumor cells causes them to lyse, killing them and also releasing signals that activate the host's own immune system, overcoming immunosuppression . This is done through the disruption of the microenvironment of the tumor cells that prevents recognition by host immune cells. [ 8 ] Tumor antigens and danger-associated molecular patterns are also released during the lysis process, which helps recruit host immune cells. [ 8 ] Currently, there are many viruses being used and tested, all differing in their ability to lyse cells, activate the immune system, and transfer genes. [ citation needed ]
As of 2019, there are over 100 clinical trials looking at different viruses, cancers, doses, routes and administrations. Most of the work has been done on herpesvirus, adenovirus, and vaccinia virus, but other viruses include measles virus, coxsackievirus, polio virus, newcastle disease virus, and more. [ 8 ] [ 10 ] Methods of delivery tested include intratumoral, intravenous, intraperitoneal, and more. [ 11 ] Types of tumor currently being studied with oncolytic viruses include CNS tumors, renal cancer, head and neck cancer, ovarian cancer, and more. [ 10 ] Oncolytic virotherapy has been tested both as a monotherapy and in combination with other therapies including chemotherapy, radiotherapy, surgery, and immunotherapy. [ 8 ] [ 10 ]
In 2015 the FDA approved the marketing of talimogene laherparepvec , a genetically engineered herpes virus, to treat melanoma lesions that cannot be operated on; as of 2019, it is the only oncolytic virus approved for clinical use. It is injected directly into the lesion. [ 12 ] As of 2016 there was no evidence that it extends the life of people with melanoma, or that it prevents metastasis. [ 13 ] Two genes were removed from the virus – one that shuts down an individual cell's defenses, and another that helps the virus evade the immune system – and a gene for human GM-CSF was added. The drug works by replicating in cancer cells, causing them to burst; it was also designed to stimulate an immune response but as of 2016, there was no evidence of this. [ 14 ] [ 12 ] The drug was created and initially developed by BioVex, Inc. and was continued by Amgen , which acquired BioVex in 2011. [ 15 ] It was the first oncolytic virus approved in the West. [ 14 ]
RIGVIR is a virotherapy drug that was approved by the State Agency of Medicines of the Republic of Latvia in 2004. [ 16 ] It is wild-type ECHO-7, a member of the echovirus group. [ 17 ] The potential use of echoviruses as oncolytic viruses to treat cancer was discovered by the Latvian scientist Aina Muceniece in the 1960s and 1970s. [ 17 ] The data used to register the drug in Latvia are not sufficient to obtain approval to use it in the US, Europe, or Japan. [ 17 ] [ 18 ] As of 2017 there was no good evidence that RIGVIR is an effective cancer treatment . [ 19 ] [ 20 ] On March 19, 2019, the manufacturer of ECHO-7, SIA LATIMA, announced the drug's removal from sale in Latvia, citing financial and strategic reasons and insufficient profitability. [ 21 ] However, several days later an investigative TV show revealed that the State Agency of Medicines had run laboratory tests on the vials and found that the amount of ECHO-7 virus was much smaller than claimed by the manufacturer. In March 2019, the distribution of ECHO-7 in Latvia was stopped. [ 22 ]
Although oncolytic viruses are engineered to specifically target tumor cells, there is always the potential for off-target effects leading to symptoms that are usually associated with that virus. [ 7 ] The most commonly reported symptoms have been flu-like symptoms. The HSV virus used as an oncolytic virus has retained its native thymidine kinase gene, which allows it to be targeted with antiviral therapy in the event of unwarranted side effects. [ 8 ]
Other challenges include developing an optimal method of delivery, either directly to the tumor site or intravenously, and allowing for the targeting of multiple sites. [ 8 ] Clinical trials include the tracking of viral replication and spread using various laboratory techniques in order to find the optimal treatment. [ citation needed ]
Another major challenge with using oncolytic viruses as therapy is avoiding the host's natural immune system, which will prevent the virus from infecting the tumor cells. [ 7 ] [ 8 ] Once the oncolytic virus is introduced to the host system, a healthy host's immune system will naturally try to fight off the virus. Because of this, if less virus is able to reach the target site, the efficacy of the oncolytic virus can be reduced. This leads to the idea that inhibiting the host's immune response may be necessary early in the treatment, but this brings safety concerns. Due to these safety concerns of immunosuppression , clinical trials have excluded patients who are immunocompromised and have active viral infections. [ citation needed ]
Viral gene therapy uses genetically engineered viral vectors to deliver therapeutic genes to cells with genetic malfunctions. [ 23 ]
The use of viral material to deliver a gene starts with the engineering of the viral vector. Though the molecular mechanisms of viral vectors differ from vector to vector, there are some general principles that are considered. [ citation needed ]
In diseases that are secondary to a genetic mutation causing the lack of a gene, the gene is added back in. [ 24 ] [ 25 ] [ 26 ] In diseases that are due to the overexpression of a gene, viral genetic engineering may be introduced to turn off the gene. [ 24 ] [ 25 ] [ 26 ] Viral gene therapy may be done in vivo or ex vivo. [ 23 ] [ 27 ] In the former, the viral vector is delivered directly to the organ or tissue of the patient. In the latter, the desired tissue is first retrieved, genetically modified, and then transferred back to the patient. The molecular mechanisms of gene delivery and/or integration into cells vary based on the viral vector that is used. [ 23 ] Rather than delivering drugs that require multiple and continuous treatments, delivery of a gene has the potential to create a long-lasting cell that can continuously produce the gene product. [ 24 ]
There have been a few successful clinical uses of viral gene therapy since the 2000s, specifically with adeno-associated virus vectors and chimeric antigen receptor T-cell therapy. [ citation needed ]
Vectors made from adeno-associated virus are among the most established products used in clinical trials today. The virus was initially attractive for gene therapy because it is not known to cause any disease, along with several other features. [ 27 ] It has also been engineered so that it does not replicate after delivering the gene. [ 27 ]
In addition, other clinical trials involving AAV gene therapy aim to treat diseases such as haemophilia along with various neurological, cardiovascular, and muscular diseases. [ 27 ]
Chimeric antigen receptor T cells (CAR T cells) are a type of immunotherapy that makes use of viral gene editing. CAR T-cell therapy uses an ex vivo method in which T lymphocytes are extracted and engineered with a virus, typically a gammaretrovirus or lentivirus , to recognize specific proteins on cell surfaces. [ 24 ] [ 34 ] This causes the T lymphocytes to attack the cells that express the undesired protein. Currently two therapies, Tisagenlecleucel and Axicabtagene ciloleucel , are FDA-approved to treat acute lymphoblastic leukemia and diffuse large B-cell lymphoma respectively. [ 24 ] Clinical trials are underway to explore its potential benefits in solid malignancies. [ 24 ]
In 2012 the European Commission approved Glybera , an AAV vector-based gene therapy product for the treatment of lipoprotein lipase deficiency in adults. [ 35 ] It was the first gene therapy approved in the EU. [ 36 ] The drug never received FDA approval in the US, and was discontinued by its manufacturer uniQure in 2017 due to profitability concerns. [ 37 ] As of 2019 [update] it is no longer authorized for use in the EU. [ 35 ]
Currently, there are still many challenges in viral gene therapy. Immune responses to viral gene therapies pose a challenge to successful treatment. [ 38 ] However, responses to viral vectors at immune privileged sites such as the eye may be reduced compared to other sites of the body. [ 38 ] [ 39 ] As with other forms of virotherapy, prevention of off-target genome editing is a concern. In addition to viral gene editing, other genome editing technologies such as CRISPR gene editing have been shown to be more precise, with more control over the delivery of genes. [ 24 ] As genome editing becomes a reality, it is also necessary to consider the ethical implications of the technology.
Viral immunotherapy is the use of viruses to stimulate the body's immune system. Unlike traditional vaccines , in which attenuated or killed viruses or bacteria are used to generate an immune response, viral immunotherapy uses genetically engineered viruses to present a specific antigen to the immune system. That antigen could come from any species of virus or bacterium, or it could be a human disease antigen, for example a cancer antigen. [ citation needed ]
Vaccines are another method of virotherapy that use attenuated or inactivated viruses to develop immunity to disease. An attenuated virus is a weakened virus that incites a natural immune response in the host that is often undetectable. The host also develops potentially life-long immunity due to the attenuated virus's similarity to the actual virus. Inactivated viruses are killed viruses that present a form of the antigen to the host. However, long-term immune response is limited. [ 40 ]
Viral immunotherapy in the context of cancer stimulates the body's immune system to fight better against cancer cells. Rather than preventing causes of cancer, as one would traditionally expect of vaccines, vaccines against cancer are used to treat cancer. [ 41 ] The mechanism depends on the virus and the treatment. Oncolytic viruses, as discussed in the previous section, stimulate the host immune system through the release of tumor-associated antigens upon lysis and through the disruption of the cancer's microenvironment, which otherwise helps it avoid the host immune system. [ 8 ] CAR T cells, also mentioned in the previous section, are another form of viral immunotherapy that uses viruses to genetically engineer immune cells to kill cancer cells. [ 24 ]
Viruses have been explored as a means to treat infections caused by protozoa . [ 42 ] [ 43 ] One such protozoan that potential virotherapy treatments have explored is Naegleria fowleri , which causes primary amebic meningoencephalitis (PAM). With a mortality rate of 95%, this disease-causing eukaryote has one of the highest pathogenic fatality rates known. Chemotherapeutic agents that target this amoeba for treating PAM have difficulty crossing the blood–brain barrier. However, virulent viruses of protozoal pathogens (VVPPs) can be used as viral therapies that can more easily access this eukaryotic disease organism by crossing the blood–brain barrier in a process analogous to that of bacteriophages . These VVPPs would also be self-replicating and therefore require infrequent administration, with lower doses, thus potentially reducing toxicity. [ 42 ] While these treatment methods for protozoal disease may show great promise in a manner similar to bacteriophage viral therapy, a notable hazard is the evolutionary consequence of using viruses capable of eukaryotic pathogenicity. VVPPs will have evolved mechanisms of DNA insertion and replication that manipulate eukaryotic surface proteins and DNA editing proteins. VVPP engineering must therefore control for viruses that may be able to mutate and thereby bind to surface proteins and manipulate the DNA of the infected host. [ citation needed ]
|
https://en.wikipedia.org/wiki/Virotherapy
|
Virtual Cell (VCell) [ 1 ] [ 2 ] [ 3 ] [ 4 ] is an open-source software platform for modeling and simulation of living organisms , primarily cells . It has been designed to be a tool for a wide range of scientists, from experimental cell biologists to theoretical biophysicists . [ 5 ]
Virtual Cell is an advanced software platform for modeling and simulating reaction kinetics, membrane transport and diffusion in the complex geometries of cells and multicellular tissues. VCell models have a hierarchical tree structure. The trunk level is the " Physiology ", consisting of compartments, species and chemical reactions, and reaction rates that are functions of concentrations. Given initial concentrations of species, VCell can calculate how these concentrations change over time. How these numerical simulations are performed is determined through a number of " Applications ", which specify whether simulations will be deterministic or stochastic, and spatial or compartmental; multiple "Applications" can also specify initial concentrations, diffusion coefficients, flow rates and a variety of modeling assumptions. Thus " Applications " can be viewed as computational experiments to test ideas about the physiological system. Each " Application " corresponds to a mathematical description, which is automatically translated into the VCell Math Description Language. Multiple " Simulations ", including parameter scans and changes in solver specifications, can be run within each " Application ".
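To illustrate the difference between a deterministic and a stochastic "Application" on the simplest possible physiology, the sketch below runs a Gillespie-style stochastic simulation of a single reversible reaction A ⇌ B with mass-action propensities. It is generic Python, not VCell's Math Description Language, and the rate constants and molecule counts are invented.

```python
import random

def gillespie_ab(A=100, B=0, kf=1.0, kr=0.5, t_end=10.0, seed=1):
    """Stochastic simulation (Gillespie's direct method) of the reversible
    reaction A <-> B with mass-action propensities kf*A and kr*B.
    Returns a list of (time, A, B) samples, one per reaction event."""
    rng = random.Random(seed)
    t, trace = 0.0, [(0.0, A, B)]
    while t < t_end:
        a_f, a_r = kf * A, kr * B          # propensities of forward / reverse steps
        a_tot = a_f + a_r
        if a_tot == 0:
            break
        t += rng.expovariate(a_tot)        # waiting time to the next reaction event
        if rng.random() < a_f / a_tot:     # choose which reaction fires
            A, B = A - 1, B + 1
        else:
            A, B = A + 1, B - 1
        trace.append((t, A, B))
    return trace

final_t, final_A, final_B = gillespie_ab()[-1]
print(final_t, final_A, final_B)  # fluctuates around the A:B = kr:kf equilibrium
```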
Models can range from the simple to the highly complex, and can represent a mixture of experimental data and purely theoretical assumptions.
The Virtual Cell can be used as a distributed application over the Internet or as a standalone application. The graphical user interface allows construction of complex models in biologically relevant terms: compartment dimensions and shape, molecular characteristics, and interaction parameters. VCell converts the biological description into an equivalent mathematical system of differential equations. Users can switch back-and-forth between the schematic biological view and the mathematical view in the common graphical interface. Indeed, if users desire, they can manipulate the mathematical description directly, bypassing the schematic view. VCell allows users a choice of numerical solvers to translate the mathematical description into software code which is executed to perform the simulations. The results can be displayed on-line, or they can be downloaded to the user's computer in a wide variety of export formats. The Virtual Cell license allows free access to all members of the scientific community. [ 6 ]
Users may save their models in the VCell Database, which is maintained on servers at the University of Connecticut. The VCell Database uses an access control system with permissions that allow users to keep their models private, share them with selected collaborators or make them public. The VCell website maintains a searchable list of models that are public and associated with research publications.
VCell supports the following features:
VCell allows users integrated access to a variety of sources to help build and annotate models:
The Virtual Cell is being developed at the R. D. Berlin Center for Cell Analysis and Modeling at the University of Connecticut Health Center . [ 16 ] The team is primarily funded through research grants from the National Institutes of Health .
|
https://en.wikipedia.org/wiki/Virtual_Cell
|
The Virtual Cybernetic Building Testbed (VCBT) is a whole building emulator located at the National Institute of Standards and Technology in Gaithersburg, Maryland . It is designed with enough flexibility to be capable of reproducibly simulating normal operation and a variety of faulty and hazardous conditions that might occur in a cybernetic building. It serves as a testbed for investigating the interactions between integrated building systems and a wide range of issues important to the development of cybernetic building technology.
The VCBT consists of a variety of simulation models that together emulate the characteristics and performance of a cybernetic building system. The simulation models are interfaced to real, state-of-the-art BACnet-speaking control systems to provide a hybrid software/hardware testbed that can be used to develop and evaluate control strategies and control products that use the BACnet communication protocol . The simulation models used are based on versions of HVACSIM+ and CFAST. [ 1 ] [ 2 ] [ 3 ]
|
https://en.wikipedia.org/wiki/Virtual_Cybernetic_Building_Testbed
|
In computing , Virtual DMA Services ( VDS ) refers to an application programming interface that allows DOS and Win16 applications and device drivers to perform DMA operations while running under protected or virtual 8086 mode .
|
https://en.wikipedia.org/wiki/Virtual_DMA_Services
|
A Virtual Interface Adapter ( VIA ) is a network protocol, comparable to protocols such as TCP/IP . As of July 2006, Microsoft SQL Server 2005 supports it. The specific implementation of VIA varies from vendor to vendor. In general, it is a network-style interface, but usually a very high-performance, dedicated connection between two systems. Part of that high performance comes from specialized, dedicated hardware that knows it has a dedicated connection and therefore doesn't have to deal with normal network addressing issues. [ 1 ]
The VIA protocol is used to support VIA devices such as VIA Storage Area Network devices.
VIA also comes up in the context of clustering, i.e. as part of a load-balancing setup: the load balancer uses VIA to connect to the databases.
The VIA protocol is deprecated by Microsoft, and will be removed in a future version of Microsoft SQL Server. It is however supported in SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. [ 2 ] [ 3 ]
|
https://en.wikipedia.org/wiki/Virtual_Interface_Adapter
|
The Virtual Interface Architecture ( VIA ) is an abstract model of a user-level zero-copy network , and is the basis for InfiniBand , iWARP and RoCE . Created by Microsoft , Intel , and Compaq , the original VIA sought to standardize the interface for high-performance network technologies known as System Area Networks (SANs; not to be confused with Storage Area Networks ).
Networks are a shared resource. With traditional network APIs such as the Berkeley socket API , the kernel is involved in every network communication. This presents a tremendous performance bottleneck when latency is an issue.
One of the classic developments in computing systems is virtual memory , a combination of hardware and software that creates the illusion of private memory for each process. In the same school of thought, a virtual network interface protected across process boundaries could be accessed at the user level. With this technology, the "consumer" manages its own buffers and communication schedule while the "provider" handles the protection.
Thus, the network interface card (NIC) provides a "private network" for a process, and a process is usually allowed to have multiple such networks. The virtual interface (VI) of VIA refers to this network and is merely the destination of the user's communication requests. Communication takes place over a pair of VIs, one on each of the processing nodes involved in the transmission. In "kernel-bypass" communication, the user manages its own buffers.
Another facet of traditional networks is that arriving data is placed in a pre-allocated buffer and then copied to the user-specified final destination. Copying large messages can take a long time, and so eliminating this step is beneficial. Another classic development in computing systems is direct memory access (DMA), in which a device can access main memory directly while the CPU is free to perform other tasks.
In a network with "remote direct memory access" ( RDMA ), the sending NIC uses DMA to read data in the user-specified buffer and transmit it as a self-contained message across the network. The receiving NIC then uses DMA to place the data into the user-specified buffer. There is no intermediary copying and all of these actions occur without the involvement of the CPUs, which has an added benefit of lower CPU utilization.
For the NIC to actually access the data through DMA, the user's page must be in memory. In VIA, the user must "pin-down" its buffers before transmission, so as to prevent the OS from swapping the page out to the disk. This action—one of the few that involve the kernel—ties the page to physical memory. To ensure that only the process that owns the registered memory may access it, the VIA NICs require permission keys known as "protection tags" during communication.
In essence, VIA is a standard that defines kernel bypass and RDMA in a network, together with a programming library called "VIPL". It has been implemented, most notably in cLAN from Giganet (now Emulex ), but VIA's major contribution has been in providing a basis for the InfiniBand , iWARP and RoCE standards.
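The user-level flow described above, registering (pinning) a buffer, associating it with a protection tag, and posting a send descriptor that the NIC completes without a kernel call, can be sketched conceptually. The sketch below is a minimal model of those ideas only; the class and function names are invented for illustration and are not the real VIPL API.

```python
# Conceptual sketch of a VIA-style user-level send path; names are
# hypothetical and do not correspond to the actual VIPL functions.
from dataclasses import dataclass

@dataclass
class RegisteredBuffer:
    data: bytearray
    protection_tag: int       # key the NIC checks before touching the memory

class VirtualInterface:
    """Models one endpoint (VI): the user owns buffers and descriptors."""
    def __init__(self, protection_tag):
        self.protection_tag = protection_tag
        self.send_queue = []

    def register_memory(self, size):
        # In a real implementation this pins pages so the NIC can DMA them.
        return RegisteredBuffer(bytearray(size), self.protection_tag)

    def post_send(self, buf: RegisteredBuffer, length: int):
        if buf.protection_tag != self.protection_tag:
            raise PermissionError("protection tag mismatch")
        # The descriptor is consumed by the NIC directly; no kernel call here.
        self.send_queue.append((buf, length))

vi = VirtualInterface(protection_tag=0x1234)
msg = vi.register_memory(4096)
msg.data[:5] = b"hello"
vi.post_send(msg, 5)
print("descriptors queued for the NIC:", len(vi.send_queue))
```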
|
https://en.wikipedia.org/wiki/Virtual_Interface_Architecture
|
The IBM Virtual Machine Communication Facility (VMCF) is a feature of the VM/370 operating system introduced in Release 3 in 1976. It "provides a method of communication and data transfer between virtual machines operating under the same VM/370 system." [ 1 ]
VMCF uses paravirtualization : the diagnose instruction's VMCF SEND function sends data, in blocks of up to 2048 bytes, from one virtual machine to another, and the receiving virtual machine accesses the data through the diagnose RECEIVE function. It provides a simpler interface and greater performance than the prior use of virtual channel-to-channel adapters for the same purpose. [ 2 ]
VMCF was superseded by the Inter User Communication Vehicle (IUCV), introduced in 1980 with VM/SP .
|
https://en.wikipedia.org/wiki/Virtual_Machine_Communication_Facility
|
Virtual Organization for Innovative Conceptual Engineering Design ( VOICED ) is a virtual organization that promotes innovation in engineering design. This project is the collaborative work of researchers at five universities across the United States , and is funded by the National Science Foundation . The goal of this virtual organization is to facilitate the sharing of design information between often geographically dispersed engineers and designers through the use of a robust and sophisticated design repository. Additionally, functional data can be mapped to historical failure data [ 1 ] and possible components [ 2 ] to create a conceptual design.
The end goal is to turn VOICED into a tool that allows engineers to create conceptual designs based on archived designs and detect failures in those designs through an open design repository (Tumer & Stone, n.d.). VOICED is a fairly new organization, being about 3–4 years old; however, the concepts that underlie the organization have been under development for much longer. [ 3 ]
Collaborative Research: VOICED - A Virtual Organization for Innovative Conceptual Engineering Design
|
https://en.wikipedia.org/wiki/Virtual_Organization_for_Innovative_Conceptual_Engineering_Design
|
The Virtual Physiological Human ( VPH ) is a European initiative that focuses on a methodological and technological framework that, once established, will enable collaborative investigation of the human body as a single complex system . [ 1 ] [ 2 ] The collective framework will make it possible to share resources and observations formed by institutions and organizations, creating disparate but integrated computer models of the mechanical, physical and biochemical functions of a living human body.
VPH is a framework which aims to be descriptive, integrative and predictive. [ 3 ] [ 4 ] [ 5 ] [ 6 ] Clapworthy et al. state that the framework should be descriptive by allowing laboratory and healthcare observations around the world "to be collected, catalogued, organized, shared and combined in any possible way." [ 5 ] It should be integrative by enabling those observations to be collaboratively analyzed by related professionals in order to create "systemic hypotheses." [ 5 ] Finally, it should be predictive by encouraging interconnections between extensible and scalable predictive models and "systemic networks that solidify those systemic hypotheses" while allowing observational comparison. [ 5 ]
The framework is formed by large collections of anatomical , physiological , and pathological data stored in digital format, typically by predictive simulations developed from these collections and by services intended to support researchers in the creation and maintenance of these models, as well as in the creation of end-user technologies to be used in the clinical practice. VPH models aim to integrate physiological processes across different length and time scales (multi-scale modelling). [ 3 ] These models make possible the combination of patient-specific data with population-based representations. The objective is to develop a systemic approach which avoids a reductionist approach and seeks not to subdivide biological systems in any particular way by dimensional scale (body, organ, tissue, cells, molecules), by scientific discipline ( biology , physiology , biophysics , biochemistry , molecular biology , bioengineering ) or anatomical sub-system ( cardiovascular , musculoskeletal, gastrointestinal, etc.). [ 5 ]
The initial concepts that led to the Virtual Physiological Human initiative came from the IUPS Physiome Project . The project was started in 1997 and represented the first worldwide effort to define the physiome through the development of databases and models which facilitated the understanding of the integrative function of cells, organs, and organisms. [ 7 ] The project focused on compiling and providing a central repository of databases that would link experimental information and computational models from many laboratories into a single, self-consistent framework.
Following the launch of the Physiome Project, there were many other worldwide initiatives of loosely coupled actions all focusing on the development of methods for modelling and simulation of human pathophysiology. In 2005, an expert workshop of the Physiome was held as part of the Functional Imaging and Modelling of the Heart Conference in Barcelona where a white paper [ 8 ] entitled Towards Virtual Physiological Human: Multilevel modelling and simulation of the human anatomy and physiology was presented. The goal of this paper was to shape a clear overview of on-going relevant VPH activities, to build a consensus on how they can be complemented by new initiatives for researchers in the EU and to identify possible mid-term and long term research challenges.
In 2006, the European Commission funded a coordination and support action entitled STEP: Structuring The EuroPhysiome . The STEP consortium promoted a significant consensus process that involved more than 300 stakeholders, including researchers, industry experts, policy makers, and clinicians. The prime result of this process was a booklet entitled Seeding the EuroPhysiome: A Roadmap to the Virtual Physiological Human . [ 6 ] The STEP action and the resulting research roadmap were instrumental in the development of the VPH concept and in the initiation of a much larger process that involves significant research funding, large collaborative projects, and a number of connected initiatives, not only in Europe but also in the United States, Japan, and China.
VPH now forms a core target of the 7th Framework Programme [ 9 ] of the European Commission , and aims to support the development of patient-specific computer models and their application in personalised and predictive healthcare. [ 10 ] The Virtual Physiological Human Network of Excellence (VPH NoE) aims to connect the various VPH projects within the 7th Framework Programme.
VPH-related projects have received substantial funding from the European Commission in order to further scientific progress in this area. The European Commission is insistent that VPH-related projects demonstrate strong industrial participation and clearly indicate a route from basic science into clinical practice. [ 5 ] In the future, it is hoped that the VPH will eventually lead to a better healthcare system which aims to produce the following benefits: [ 6 ]
Personalized care solutions are a key aim of the VPH, with new modelling environments for predictive, individualized healthcare to result in better patient safety and drug efficacy. It is anticipated that the VPH could also result in healthcare improvement through greater understanding of pathophysiological processes. [ 3 ] The use of biomedical data from a patient to simulate potential treatments and outcomes could prevent the patient from experiencing unnecessary or ineffective treatments. [ 11 ] The use of in silico (by computer simulation) modelling and testing of drugs could also reduce the need for experiments on animals.
A future goal is that there also will be a more holistic approach to medicine with the body treated as a single multi-organ system rather than as a collection of individual organs. Advanced integrative tools should further help to improve the European healthcare system on a number of different levels that include diagnosis, treatment and care of patients and in particular quality of life. [ 6 ]
ImmunoGrid is a project funded by the EU under Framework 6, to model and simulate the human immune system using grid computing at different physiological levels. [ 12 ]
VPHOP (Osteoporotic Virtual Physiological Human) is a European Osteoporosis research project within the framework of the Virtual Physiological Human initiative. With current technology, osteoporotic fractures can be predicted with an accuracy of less than 70%. Better ways to prevent and diagnose osteoporotic fractures are needed.
Current fracture predictions are based on history and examination, on the basis of which key factors are identified that contribute to the increased probability of an osteoporotic fracture. This approach oversimplifies the mechanisms leading to an osteoporotic fracture and fails to take into account numerous hierarchical factors which are unique to the individual. These factors range from cell-level to body-level functions. Musculoskeletal anatomy and neuromotor control define the daily loading spectrum, including paraphysiological overloading events. Fracture events occur at the organ level and are influenced by the elasticity and geometry of bone; elasticity and geometry are in turn determined by tissue morphology . Cell activity changes tissue morphology and composition over time. Constituents of the extracellular matrix are the prime determinants of tissue strength. Accuracy could be dramatically improved if a more deterministic approach were used that accounts for those factors and their variation between individuals.
The goal of the Osteoporotic Virtual Physiological Human is to improve the accuracy of these osteoporotic fracture prediction algorithms.
|
https://en.wikipedia.org/wiki/Virtual_Physiological_Human
|
Virtual USA (vUSA), is a joint federal and state collaboration on a project that would allow state and local on-line tools and technologies, such as caches of geospatial data, to be interoperable and more useful with the goal of creating a "Virtual USA" for emergency response purposes. [ 1 ] The initiative was developed by the DHS Directorate for Science and Technology (S&T), and currently operates as a pilot in eight states — Alabama , Georgia , Florida , Louisiana , Mississippi , Texas , Virginia and Tennessee — with plans to incorporate additional states. [ 2 ]
Virtual USA is part of the DHS' Open Government plan, which is part of the Obama administration's goal to promote a greater amount of transparency and openness between the government and citizens. [ 3 ]
The stated goal of Virtual USA is to aggregate existing data, from federal, state, local, tribal, and other information into a common operating picture to assist first responders during emergencies.
Virtual USA:
|
https://en.wikipedia.org/wiki/Virtual_USA
|
The Virtual breakdown mechanism is a concept in the field of electrochemistry . In electrochemical reactions, when the cathode and the anode are close enough to each other ( i.e. , in so-called "nanogap electrochemical cells "), the double layer regions of the two electrodes overlap, forming a large electric field uniformly distributed across the entire electrode gap. Such a high electric field can significantly enhance ion migration inside the bulk solution and thus increase the overall reaction rate, akin to a " breakdown " of the reactant(s). However, it is fundamentally different from the traditional " breakdown ".
The Virtual breakdown mechanism was first described in 2017, when researchers studying pure water electrolysis in deep-sub-Debye-length nanogap electrochemical cells established the relation between the cathode–anode gap distance and the performance of electrochemical reactions. [ 1 ]
The fundamental difference between traditional cells and nanogap cells is their electric potential distribution. This is the premise of the "virtual breakdown" effect.
For electrochemical reactions with high-concentration electrolyte in the macrosystem, the Debye-length is quite small. Due to the screening effect almost all of the potential drop is confined within the small Debye-length region (or double layer region). The potential in bulk solution (far from the electrodes) does not change too much, meaning that there is nearly zero electric field inside the bulk solution. However, when the counter electrode is within the Debye-length region ( i.e ., nanogap electrochemical cells), two double layers from anode and cathode overlap with each other. The electrostatic potential inside the entire gap changes dramatically, meaning that the huge electric field is uniformly distributed across the entire gap.
We shall consider pure water electrolysis as an example to explain the concept of the Virtual breakdown mechanism.
For the analysis of water electrolysis, we shall use H 3 O + ions (also known as oxonium ions ) at the cathode, as an example to explain the traditional reactions.
Water molecules self-ionize into H 3 O + and OH − ions. Near the cathode surface (within the double layer region), newly generated H 3 O + ions gain electrons from the cathode and become hydrogen gas; however, because there is nearly no electric field inside the bulk solution (see section "Electric field distribution"), OH − ions can only transport through the bulk solution very slowly by diffusion . Moreover, in pure water the intrinsic H 3 O + concentration is only 10 −7 mol/L, not enough to neutralize the newly generated OH − ions. In this way OH − ions accumulate locally at the cathode surface, turning the solution near the cathode alkaline. Due to Le Chatelier's principle for water self-ionization ,
H 3 O + + OH − ↽ − − ⇀ 2 H 2 O {\displaystyle {\ce {H3O+ + OH- <=> 2H2O}}}
the accumulation of OH − ions impedes further self-ionization of the water, which reduces the hydrogen evolution rate and eventually prevents water electrolysis. In this case water electrolysis becomes very slow or even halts; this manifests as a large equivalent resistance between the two electrodes.
This is why in the macrosystem pure water cannot be electrolyzed efficiently - the fundamental reason is the lack of rapid ion transport inside the bulk solution. [ 1 ]
In nanogap cells the high electric field can distribute uniformly across the entire gap (see section "Electric field distribution"). This is different from ion transport in the macrosystem: now newly generated OH − ions can immediately migrate from cathode to anode. In the case where the two electrodes are close enough, the mass transport rate can be even larger than the electron-transfer rate. This results in OH − ions clustering for electron-transfer at the anode, rather than accumulating at the cathode. In this way the entire reaction can keep going and not self-limit.
Notice that for pure water electrolysis in nanogap cells, the net OH − ion accumulation near the anode not only increases the local reactant concentration but also decreases the overpotential requirement (as in the Frumkin effect ). [ 2 ] According to Butler–Volmer equation , such ion accumulation increases the electrolysis current, i.e. the water splitting throughput and efficiency.
Thus even pure water can be efficiently electrolyzed, when the electrode gap is small enough.
In reality, water molecule dissociation (the splitting into H 3 O + and OH − ions) occurs only at the electrode regions (because the ions are continuously consumed at the two electrodes); however, it effectively appears that the molecules split in the middle of the gap, with H 3 O + ions migrating towards the cathode and OH − ions migrating towards the anode, respectively. The huge electric field in the nanogap (see section "Electric field distribution") not only increases the transport rate but also enhances the ionization of the water molecules ( i.e. , the local ion concentration). From a microscopic perspective, the total effect appears like the breakdown of water molecules.
However this effect is not traditional breakdown, which in fact requires a much larger electric field around 1 V/Å. [ 3 ] In the nanogap cells the huge electric field is still not large enough to split water molecules directly. However it can take advantage of the self-ionization of water , facilitating the equilibrium reaction to shift in the ionization direction. [ 1 ]
2 H 2 O ⟶ H 3 O + + OH − {\displaystyle {\ce {2H2O -> H3O+ + OH-}}}
Such field-assisted ionization, together with the fast ion transport (mainly migration ), behaves in a way very similar to the breakdown of water molecules; that is why this field-assisted effect is named the "virtual breakdown mechanism".
Consider the equation of conductivity ,
σ = n q μ {\displaystyle \sigma =nq\mu }
Here the ion charges are not changed. The ion concentration is enhanced, but it contributes to the conductivity only partially. The fundamental change is that the "apparent mobility " has been significantly enhanced, which acts like a " breakdown " effect. (In traditional electrochemical cells, although the intrinsic ion mobility is high, it cannot contribute to the conductivity because there is nearly zero electric field inside the bulk solution.) Consider the equivalent resistance between the two electrodes, as given by:
R = ρ L S {\displaystyle R=\rho {L \over S}}
When the gap distance between the two electrodes is decreased, not only does the value of L decrease, but the resistivity decreases as well; the latter in fact contributes more to the decrease of the total resistance. [ 1 ]
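A short numerical sketch of these two formulas makes the scaling concrete. All parameter values below are illustrative placeholders (they are not taken from the cited study): the apparent mobility is simply assumed to grow once the gap shrinks below the Debye length, which is enough to show that the equivalent resistance falls faster than the gap length alone would suggest.

```python
# Toy calculation of R = rho * L / S with rho = 1 / (n * q * mu_apparent).
# All numbers are illustrative placeholders, not measured values.
q = 1.602e-19                 # elementary charge (C)
S = 1e-12                     # electrode area (m^2), assumed
n = 6.0e19                    # carrier density (1/m^3), assumed for pure water

def equivalent_resistance(gap_m, mu_bulk=3.0e-7, debye_m=1e-6):
    # Crude model: the apparent mobility grows once the gap shrinks below the
    # Debye length, because the field then spans the whole gap.
    enhancement = max(1.0, debye_m / gap_m)
    mu_apparent = mu_bulk * enhancement
    rho = 1.0 / (n * q * mu_apparent)
    return rho * gap_m / S

for gap in (1e-4, 1e-6, 1e-8):   # 100 um, 1 um (~ Debye length), 10 nm
    print(f"gap = {gap:.0e} m -> R ~ {equivalent_resistance(gap):.3e} ohm")
```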
This "virtual breakdown mechanism" can be applied to almost all kinds of weakly-ionized materials; in fact, such weaker ionization can lead to larger Debye-length inside the solution. At the same size scale it actually helps to achieve the virtual breakdown effect.
The phase diagram shows the importance of the electrode gap distance to the performance of electrochemical reactions. For traditional macrosystems, where the electrode gap distance is much larger than the Debye-length , two half-reactions are decoupled and cannot influence each other. Normally the electrochemical current is limited by a slow diffusion step. When the gap distance is reduced to around the Debye-length, a large electric field can form between the two electrodes (due to double layers and the two regions overlapping with each other); this enhances the mass transport rate. In this region the electrolysis current is very sensitive to the gap distance and the reactions are migration -rate limited. When the gap distance is further reduced to the deep-sub-Debye-length region, the mass transport can be enhanced further to a level even faster than the electron-transfer step. In this region, even when we shrink the gap distance further, the current cannot be enlarged any more, meaning that the current has reached saturation. Here the two half-reactions are coupled together and the reactions are limited by the electron-transfer steps.
Therefore, by just adjusting the gap distance, the fundamental performance of the electrochemical reactions can be significantly changed.
|
https://en.wikipedia.org/wiki/Virtual_breakdown_mechanism
|
Virtual design and construction (VDC) is the management of integrated multi-disciplinary performance models of design–construction projects, including the product (facilities), work processes , and organization of the design – construction – operation team to support explicit and public business objectives. [ 1 ] This is usually achieved by creating a digital twin of the project, within which the project information is managed.
The theoretical basis of VDC includes: [ 2 ]
"Virtual design and construction BIMs are virtual because they show computer-based descriptions of the project. The BIM project model emphasizes those aspects of the project that can be designed and managed, i.e., the product (typically a building or plant [and infrastructure]), the organization that will define, design, construct, and operate it, and the process the organization teams will follow, that is, the product–organization–process or POP. These models are logically integrated in the sense that they all can access shared data , and if a user highlights or changes an aspect of one, the integrated models can highlight or change the dependent aspects of related models. The models are multi-disciplinary in the sense that they represent the architect, engineering, construction (AEC), and owner of the project, as well as relevant sub-disciplines. The models are performance models in the sense that they predict some aspects of project performance, track many that are relevant, and can show predicted and measured performance in relationship to stated project performance objectives. Some companies now practice the first steps of BIM modeling, and they consistently find that they improve business performance by doing so." [ 3 ] Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail. [ 4 ]
Advances in construction engineering began with the ten volumes on architecture completed by Vitruvius , a Roman writer of the 1st century BC. Vitruvius laid the key and lasting foundation for the study of construction.
A principle of construction is the use of an applied ontology based on an upper ontology . In practice, these ontologies take the form of breakdown structures such as the work breakdown structure . Usually breakdown structures form metadata to represent a construction activity; there are notable cases at exceptionally large construction companies where the activities are simply numbered. In practice, an ontology approach requires a semantic integration approach to construction data so as to capture the present status of construction activities (i.e., the project).
The research that forms virtual design and construction (VDC) is based on scientific evidence and validation measured against a best theory, as opposed to a best practice. This approach, pioneered by Dr. Kunz, was a departure from earlier construction engineering methodologies that focused on studies of best practices. The scientific evidence method requires formulating a hypothesis and then testing that hypothesis to failure so as to validate it. A range of scientific methodologies have proven useful in construction engineering research, in both qualitative research and quantitative research . Because construction is difficult to replicate in a controlled setting, the case-based reasoning , case study and action research methodologies prevail. The power of a method is important to include in results; the case study is often broad and action research is often focused.
A core concept in VDC is spacetime dimensions. There are four dimensions: three space dimensions and a fourth, time. There are additional dimensions of cost and quality, but these four form the core. The importance of perspective (i.e., 3D) and time (i.e., 4D) was already understood by Vitruvius. Prior to computing, the focus was on the fourth dimension of time; in practice, time is the focus of the critical path method . With advances in computing, the representation of the three dimensions of space has increased. The merging of space and the ontology discussed above formed the information model , known in the construction engineering field as building information modeling . The combination of space and time in practice is shown by the linear scheduling method and, in close relation, the 4D model.
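As an illustration only (not any particular BIM tool's data model), a 4D model can be thought of as building elements carrying both geometry and a scheduled time window; the classes, dates, and quantities below are invented for this sketch.

```python
# Toy "4D" linkage: building elements (space) tied to schedule activities (time).
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Element:                 # minimal stand-in for a 3D model object
    name: str
    volume_m3: float

@dataclass
class Activity:                # minimal stand-in for a schedule activity
    element: Element
    start: date
    duration_days: int

    @property
    def finish(self):
        return self.start + timedelta(days=self.duration_days)

schedule = [
    Activity(Element("foundation", 120.0), date(2024, 3, 1), 10),
    Activity(Element("ground-floor slab", 45.0), date(2024, 3, 11), 5),
]

# Query the 4D model: which elements are under construction on a given day?
day = date(2024, 3, 12)
active = [a.element.name for a in schedule if a.start <= day <= a.finish]
print(f"Elements in progress on {day}: {active}")
```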
Computing brought with it the need to align with software developers. Previously, pencil and paper were forgiving of mixing methods from different schools of thought; software is not as forgiving, and mixing software packages requires interoperability as an explicit goal. This is the subject of interoperability research. The practical application is demonstrated by the Industry Foundation Classes .
Today, the most compelling advances in VDC are in computer vision , artificial intelligence, and the architecture of transmission (AoT), an object-oriented project lifecycle management process which acts as a counterpoint to commissioned IoT technologies. [2]
An important application of VDC is in the workzone. This is where the construction activities reside, and the workforce is a core component. To create an educated workforce with the technical know-how to use the technology tools now available, VDC includes the development of advanced vocational education topics.
|
https://en.wikipedia.org/wiki/Virtual_design_and_construction
|
In analytical mechanics , a branch of applied mathematics and physics , a virtual displacement (or infinitesimal variation ) δ γ {\displaystyle \delta \gamma } shows how the mechanical system's trajectory can hypothetically (hence the term virtual ) deviate very slightly from the actual trajectory γ {\displaystyle \gamma } of the system without violating the system's constraints. [ 1 ] [ 2 ] [ 3 ] : 263 For every time instant t , {\displaystyle t,} δ γ ( t ) {\displaystyle \delta \gamma (t)} is a vector tangential to the configuration space at the point γ ( t ) . {\displaystyle \gamma (t).} The vectors δ γ ( t ) {\displaystyle \delta \gamma (t)} show the directions in which γ ( t ) {\displaystyle \gamma (t)} can "go" without breaking the constraints.
For example, the virtual displacements of the system consisting of a single particle on a two-dimensional surface fill up the entire tangent plane, assuming there are no additional constraints.
If, however, the constraints require that all the trajectories γ {\displaystyle \gamma } pass through the given point q {\displaystyle \mathbf {q} } at the given time τ , {\displaystyle \tau ,} i.e. γ ( τ ) = q , {\displaystyle \gamma (\tau )=\mathbf {q} ,} then δ γ ( τ ) = 0. {\displaystyle \delta \gamma (\tau )=0.}
Let M {\displaystyle M} be the configuration space of the mechanical system, t 0 , t 1 ∈ R {\displaystyle t_{0},t_{1}\in \mathbb {R} } be time instants, q 0 , q 1 ∈ M , {\displaystyle q_{0},q_{1}\in M,} C ∞ [ t 0 , t 1 ] {\displaystyle C^{\infty }[t_{0},t_{1}]} consists of smooth functions on [ t 0 , t 1 ] {\displaystyle [t_{0},t_{1}]} , and
P ( M ) = { γ ∈ C ∞ ( [ t 0 , t 1 ] , M ) ∣ γ ( t 0 ) = q 0 , γ ( t 1 ) = q 1 } . {\displaystyle P(M)=\{\gamma \in C^{\infty }([t_{0},t_{1}],M)\mid \gamma (t_{0})=q_{0},\ \gamma (t_{1})=q_{1}\}.}
The constraints γ ( t 0 ) = q 0 , {\displaystyle \gamma (t_{0})=q_{0},} γ ( t 1 ) = q 1 {\displaystyle \gamma (t_{1})=q_{1}} are here for illustration only. In practice, for each individual system, an individual set of constraints is required.
For each path γ ∈ P ( M ) {\displaystyle \gamma \in P(M)} and ϵ 0 > 0 , {\displaystyle \epsilon _{0}>0,} a variation of γ {\displaystyle \gamma } is a function Γ : [ t 0 , t 1 ] × [ − ϵ 0 , ϵ 0 ] → M {\displaystyle \Gamma :[t_{0},t_{1}]\times [-\epsilon _{0},\epsilon _{0}]\to M} such that, for every ϵ ∈ [ − ϵ 0 , ϵ 0 ] , {\displaystyle \epsilon \in [-\epsilon _{0},\epsilon _{0}],} Γ ( ⋅ , ϵ ) ∈ P ( M ) {\displaystyle \Gamma (\cdot ,\epsilon )\in P(M)} and Γ ( t , 0 ) = γ ( t ) . {\displaystyle \Gamma (t,0)=\gamma (t).} The virtual displacement δ γ : [ t 0 , t 1 ] → T M {\displaystyle \delta \gamma :[t_{0},t_{1}]\to TM} ( T M {\displaystyle (TM} being the tangent bundle of M ) {\displaystyle M)} corresponding to the variation Γ {\displaystyle \Gamma } assigns [ 1 ] to every t ∈ [ t 0 , t 1 ] {\displaystyle t\in [t_{0},t_{1}]} the tangent vector
δ γ ( t ) = d Γ ( t , ϵ ) d ϵ | ϵ = 0 ∈ T γ ( t ) M . {\displaystyle \delta \gamma (t)=\left.{\frac {d\Gamma (t,\epsilon )}{d\epsilon }}\right|_{\epsilon =0}\in T_{\gamma (t)}M.}
In terms of the tangent map ,
δ γ ( t ) = Γ ∗ t ( d d ϵ | ϵ = 0 ) . {\displaystyle \delta \gamma (t)=\Gamma _{*}^{t}\left(\left.{\frac {d}{d\epsilon }}\right|_{\epsilon =0}\right).}
Here Γ ∗ t : T 0 [ − ϵ , ϵ ] → T Γ ( t , 0 ) M = T γ ( t ) M {\displaystyle \Gamma _{*}^{t}:T_{0}[-\epsilon ,\epsilon ]\to T_{\Gamma (t,0)}M=T_{\gamma (t)}M} is the tangent map of Γ t : [ − ϵ , ϵ ] → M , {\displaystyle \Gamma ^{t}:[-\epsilon ,\epsilon ]\to M,} where Γ t ( ϵ ) = Γ ( t , ϵ ) , {\displaystyle \Gamma ^{t}(\epsilon )=\Gamma (t,\epsilon ),} and d d ϵ | ϵ = 0 ∈ T 0 [ − ϵ , ϵ ] . {\displaystyle \textstyle {\frac {d}{d\epsilon }}{\Bigl |}_{\epsilon =0}\in T_{0}[-\epsilon ,\epsilon ].}
A single particle freely moving in R 3 {\displaystyle \mathbb {R} ^{3}} has 3 degrees of freedom. The configuration space is M = R 3 , {\displaystyle M=\mathbb {R} ^{3},} and P ( M ) = C ∞ ( [ t 0 , t 1 ] , M ) . {\displaystyle P(M)=C^{\infty }([t_{0},t_{1}],M).} For every path γ ∈ P ( M ) {\displaystyle \gamma \in P(M)} and a variation Γ ( t , ϵ ) {\displaystyle \Gamma (t,\epsilon )} of γ , {\displaystyle \gamma ,} there exists a unique function σ : [ t 0 , t 1 ] → R 3 {\displaystyle \sigma :[t_{0},t_{1}]\to \mathbb {R} ^{3}} such that Γ ( t , ϵ ) = γ ( t ) + σ ( t ) ϵ + o ( ϵ ) , {\displaystyle \Gamma (t,\epsilon )=\gamma (t)+\sigma (t)\epsilon +o(\epsilon ),} as ϵ → 0. {\displaystyle \epsilon \to 0.} By the definition,
δ γ ( t ) = ( d d ϵ ( γ ( t ) + σ ( t ) ϵ + o ( ϵ ) ) ) | ϵ = 0 {\displaystyle \delta \gamma (t)=\left.\left({\frac {d}{d\epsilon }}{\Bigl (}\gamma (t)+\sigma (t)\epsilon +o(\epsilon ){\Bigr )}\right)\right|_{\epsilon =0}}
which leads to
δ γ ( t ) = σ ( t ) ∈ T γ ( t ) R 3 . {\displaystyle \delta \gamma (t)=\sigma (t)\in T_{\gamma (t)}\mathbb {R} ^{3}.}
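This free-particle case can be checked symbolically for a concrete choice of path and variation direction. The sketch below assumes the SymPy library and picks an arbitrary γ and σ purely for illustration; differentiating Γ(t, ε) = γ(t) + σ(t)ε with respect to ε at ε = 0 recovers δγ(t) = σ(t) as stated above.

```python
# Symbolic check of the free-particle case: delta gamma(t) = sigma(t).
import sympy as sp

t, eps = sp.symbols("t epsilon")
gamma = sp.Matrix([sp.cos(t), sp.sin(t), t])   # an arbitrary concrete path in R^3
sigma = sp.Matrix([t, 1, sp.exp(-t)])          # an arbitrary variation direction

Gamma = gamma + eps * sigma                    # the variation Gamma(t, eps)
delta_gamma = Gamma.diff(eps).subs(eps, 0)     # d/d eps at eps = 0

assert delta_gamma == sigma                    # recovers delta gamma(t) = sigma(t)
print(delta_gamma.T)
```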
N {\displaystyle N} particles moving freely on a two-dimensional surface S ⊂ R 3 {\displaystyle S\subset \mathbb {R} ^{3}} have 2 N {\displaystyle 2N} degrees of freedom. The configuration space here is
M = { ( r 1 , … , r N ) ∈ R 3 N ∣ r i ∈ R 3 ; r i ≠ r j if i ≠ j } , {\displaystyle M=\{(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N})\in \mathbb {R} ^{3\,N}\mid \mathbf {r} _{i}\in \mathbb {R} ^{3};\ \mathbf {r} _{i}\neq \mathbf {r} _{j}\ {\text{if}}\ i\neq j\},}
where r i ∈ R 3 {\displaystyle \mathbf {r} _{i}\in \mathbb {R} ^{3}} is the radius vector of the i th {\displaystyle i^{\text{th}}} particle. It follows that
T ( r 1 , … , r N ) M = T r 1 S ⊕ … ⊕ T r N S , {\displaystyle T_{(\mathbf {r} _{1},\ldots ,\mathbf {r} _{N})}M=T_{\mathbf {r} _{1}}S\oplus \ldots \oplus T_{\mathbf {r} _{N}}S,}
and every path γ ∈ P ( M ) {\displaystyle \gamma \in P(M)} may be described using the radius vectors r i {\displaystyle \mathbf {r} _{i}} of each individual particle, i.e.
γ ( t ) = ( r 1 ( t ) , … , r N ( t ) ) . {\displaystyle \gamma (t)=(\mathbf {r} _{1}(t),\ldots ,\mathbf {r} _{N}(t)).}
This implies that, for every δ γ ( t ) ∈ T ( r 1 ( t ) , … , r N ( t ) ) M , {\displaystyle \delta \gamma (t)\in T_{(\mathbf {r} _{1}(t),\ldots ,\mathbf {r} _{N}(t))}M,}
δ γ ( t ) = δ r 1 ( t ) ⊕ … ⊕ δ r N ( t ) , {\displaystyle \delta \gamma (t)=\delta \mathbf {r} _{1}(t)\oplus \ldots \oplus \delta \mathbf {r} _{N}(t),}
where δ r i ( t ) ∈ T r i ( t ) S . {\displaystyle \delta \mathbf {r} _{i}(t)\in T_{\mathbf {r} _{i}(t)}S.} Some authors express this as
δ γ = ( δ r 1 , … , δ r N ) . {\displaystyle \delta \gamma =(\delta \mathbf {r} _{1},\ldots ,\delta \mathbf {r} _{N}).}
A rigid body rotating around a fixed point with no additional constraints has 3 degrees of freedom. The configuration space here is M = S O ( 3 ) , {\displaystyle M=SO(3),} the special orthogonal group of dimension 3 (otherwise known as 3D rotation group ), and P ( M ) = C ∞ ( [ t 0 , t 1 ] , M ) . {\displaystyle P(M)=C^{\infty }([t_{0},t_{1}],M).} We use the standard notation s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} to refer to the three-dimensional linear space of all skew-symmetric three-dimensional matrices. The exponential map exp : s o ( 3 ) → S O ( 3 ) {\displaystyle \exp :{\mathfrak {so}}(3)\to SO(3)} guarantees the existence of ϵ 0 > 0 {\displaystyle \epsilon _{0}>0} such that, for every path γ ∈ P ( M ) , {\displaystyle \gamma \in P(M),} its variation Γ ( t , ϵ ) , {\displaystyle \Gamma (t,\epsilon ),} and t ∈ [ t 0 , t 1 ] , {\displaystyle t\in [t_{0},t_{1}],} there is a unique path Θ t ∈ C ∞ ( [ − ϵ 0 , ϵ 0 ] , s o ( 3 ) ) {\displaystyle \Theta ^{t}\in C^{\infty }([-\epsilon _{0},\epsilon _{0}],{\mathfrak {so}}(3))} such that Θ t ( 0 ) = 0 {\displaystyle \Theta ^{t}(0)=0} and, for every ϵ ∈ [ − ϵ 0 , ϵ 0 ] , {\displaystyle \epsilon \in [-\epsilon _{0},\epsilon _{0}],} Γ ( t , ϵ ) = γ ( t ) exp ( Θ t ( ϵ ) ) . {\displaystyle \Gamma (t,\epsilon )=\gamma (t)\exp(\Theta ^{t}(\epsilon )).} By the definition,
δ γ ( t ) = ( d d ϵ ( γ ( t ) exp ( Θ t ( ϵ ) ) ) ) | ϵ = 0 = γ ( t ) d Θ t ( ϵ ) d ϵ | ϵ = 0 . {\displaystyle \delta \gamma (t)=\left.\left({\frac {d}{d\epsilon }}{\Bigl (}\gamma (t)\exp(\Theta ^{t}(\epsilon )){\Bigr )}\right)\right|_{\epsilon =0}=\gamma (t)\left.{\frac {d\Theta ^{t}(\epsilon )}{d\epsilon }}\right|_{\epsilon =0}.}
Since, for some function σ : [ t 0 , t 1 ] → s o ( 3 ) , {\displaystyle \sigma :[t_{0},t_{1}]\to {\mathfrak {so}}(3),} Θ t ( ϵ ) = ϵ σ ( t ) + o ( ϵ ) {\displaystyle \Theta ^{t}(\epsilon )=\epsilon \sigma (t)+o(\epsilon )} , as ϵ → 0 {\displaystyle \epsilon \to 0} ,
δ γ ( t ) = γ ( t ) σ ( t ) ∈ T γ ( t ) S O ( 3 ) . {\displaystyle \delta \gamma (t)=\gamma (t)\sigma (t)\in T_{\gamma (t)}\mathrm {SO} (3).}
|
https://en.wikipedia.org/wiki/Virtual_displacement
|
Virtual engineering ( VE ) is defined as integrating geometric models and related engineering tools, such as analysis, simulation , optimization , and decision-making tools, within a computer-generated environment that facilitates multidisciplinary collaborative product development. Virtual engineering shares many characteristics with software engineering , such as the ability to obtain many different results through different implementations.
A virtual engineering environment provides a user-centered, first-person perspective that enables users to interact with an engineered system naturally and provides users with a wide range of accessible tools. This requires an engineering model that includes the geometry, physics, and any quantitative or qualitative data from the real system. The user should be able to walk through the operating system and observe how it works and how it responds to changes in design, operation, or any other engineering modification. Interaction within the virtual environment should provide an easily understood interface, appropriate to the user's technical background and expertise, that enables the user to explore and discover unexpected but critical details about the system's behavior. Similarly, engineering tools and software should fit naturally into the environment and allow the user to maintain her or his focus on the engineering problem at hand. A key aim of virtual engineering is to engage the human capacity for complex evaluation.
The key components of such an environment include:
Virtual engineering allows engineers to work with objects in a virtual space without having to think about the objects' underlying technical information. When an engineer takes hold of a virtual component and moves or alters it, he or she should only have to think about the consequences of such a move in the component's real-world counterpart. Engineers must also be able to create a picture of the system, the various parts of the system, and how the parts will interact with each other. When engineers can focus on making decisions for particular engineering issues rather than on the underlying technical information, design cycles and costs are reduced.
The modules of a virtual engineering environment are usually named as follows:
Other modules can exist performing various other tasks, such as prototype manufacturing and product life cycle management.
|
https://en.wikipedia.org/wiki/Virtual_engineering
|
Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems.
Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes , that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive dedicated hardware to be replaced by already purchased computer hardware; for example, an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope , and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
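As a minimal sketch of this idea (not tied to any particular vendor's software), oscilloscope-style measurements can be computed entirely in software from generic digitized samples; here the samples are synthesized, whereas in a real system they would come from an ADC, and the sample rate and test signal are assumed values.

```python
# Toy virtual oscilloscope: measurement functions implemented in software
# over generic ADC samples (synthesized here for a self-contained example).
import numpy as np

fs = 100_000                                  # assumed sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                # 10 ms capture window
samples = 1.5 * np.sin(2 * np.pi * 1_000 * t) # stand-in for ADC data (1 kHz tone)

def measure(trace, sample_rate):
    """Oscilloscope-style measurements computed entirely in software."""
    vpp = trace.max() - trace.min()           # peak-to-peak amplitude
    vrms = np.sqrt(np.mean(trace ** 2))       # RMS amplitude
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, d=1 / sample_rate)
    dominant = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
    return vpp, vrms, dominant

vpp, vrms, f0 = measure(samples, fs)
print(f"Vpp = {vpp:.2f} V, Vrms = {vrms:.2f} V, dominant frequency = {f0:.0f} Hz")
```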
The concept of a synthetic instrument is a subset of the virtual instrumentation concept. A synthetic instrument is a kind of virtual instrument that is purely software defined: it performs a specific synthesis, analysis, or measurement function on completely generic, measurement-agnostic hardware. Virtual instrumentation can still involve measurement-specific hardware, and tends to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instrumentation is by definition not specific to the measurement, nor is it necessarily (or usually) modular.
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter , virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments ' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.
Some companies have developed a related technology called "hard virtual instrumentation", in which the execution of the software functions is performed by the hardware itself, which can help with fast real-time processing. [ citation needed ]
|
https://en.wikipedia.org/wiki/Virtual_instrumentation
|
In genetics , virtual karyotype is the digital information reflecting a karyotype , resulting from the analysis of short sequences of DNA from specific loci all over the genome , which are isolated and enumerated. [ 1 ] It detects genomic copy number variations at a higher level of resolution than conventional karyotyping or chromosome-based comparative genomic hybridization (CGH). [ 2 ] The main methods used for creating virtual karyotypes are array-comparative genomic hybridization and SNP arrays .
A karyotype (Fig 1) is the characteristic chromosome complement of a eukaryote species . [ 3 ] [ 4 ] A karyotype is typically presented as an image of the chromosomes from a single cell arranged from largest (chromosome 1) to smallest (chromosome 22), with the sex chromosomes (X and Y) shown last. Historically, karyotypes have been obtained by staining cells after they have been chemically arrested during cell division. Karyotypes have been used for several decades to identify chromosomal abnormalities in both germline and cancer cells. Conventional karyotypes can assess the entire genome for changes in chromosome structure and number, but the resolution is relatively coarse, with a detection limit of 5-10Mb. [ citation needed ]
Recently, platforms for generating high-resolution karyotypes in silico from disrupted DNA have emerged, such as array comparative genomic hybridization (arrayCGH) and SNP arrays . Conceptually, the arrays are composed of hundreds to millions of probes which are complementary to a region of interest in the genome. The disrupted DNA from the test sample is fragmented, labeled, and hybridized to the array. The hybridization signal intensities for each probe are used by specialized software to generate a log2ratio of test/normal for each probe on the array. [ citation needed ]
Knowing the address of each probe on the array and the address of each probe in the genome, the software lines up the probes in chromosomal order and reconstructs the genome in silico (Fig 2 and 3).
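The per-probe log2 ratio computation described above is simple to illustrate. A minimal sketch with synthetic probe intensities (not any vendor's actual analysis pipeline): probes are sorted by genomic position and the test/reference ratio is taken per probe, so a hemizygous deletion shows up near log2(1/2) = -1.

```python
# Minimal sketch of the per-probe log2(test/reference) calculation used to
# reconstruct a virtual karyotype; intensities below are synthetic.
import numpy as np

# (chromosome, position, test intensity, reference intensity) for a few probes
probes = [
    ("1", 1_000_000, 980.0, 1000.0),
    ("1", 2_000_000, 1015.0, 1000.0),
    ("1", 3_000_000, 510.0, 1000.0),   # region with a single-copy loss
    ("1", 4_000_000, 495.0, 1000.0),
]

probes.sort(key=lambda p: (p[0], p[1]))        # line probes up in genomic order
log2_ratios = [np.log2(test / ref) for _, _, test, ref in probes]

for (chrom, pos, *_), lr in zip(probes, log2_ratios):
    call = "loss" if lr < -0.3 else ("gain" if lr > 0.3 else "normal")
    print(f"chr{chrom}:{pos:>9}  log2 ratio = {lr:+.2f}  -> {call}")
```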
Virtual karyotypes have dramatically higher resolution than conventional cytogenetics. The actual resolution will depend on the density of probes on the array. Currently, the Affymetrix SNP6.0 is the highest density commercially available array for virtual karyotyping applications. It contains 1.8 million polymorphic and non-polymorphic markers for a practical resolution of 10-20kb—about the size of a gene. This is approximately 1000-fold greater resolution than karyotypes obtained from conventional cytogenetics. [ citation needed ]
Virtual karyotypes can be performed on germline samples for constitutional disorders, [ 5 ] [ 6 ] and clinical testing is available from dozens of CLIA certified laboratories ( genetests.org ). Virtual karyotyping can also be done on fresh or formalin-fixed paraffin-embedded tumors. [ 7 ] [ 8 ] [ 9 ] CLIA-certified laboratories offering testing on tumors include Creighton Medical Laboratories (fresh and paraffin embedded tumor samples) and CombiMatrix Molecular Diagnostics (fresh tumor samples).
Array-based karyotyping can be done with several different platforms, both laboratory-developed and commercial. The arrays themselves can be genome-wide (probes distributed over the entire genome) or targeted (probes for genomic regions known to be involved in a specific disease) or a combination of both. Further, arrays used for karyotyping may use non-polymorphic probes, polymorphic probes (i.e., SNP-containing), or a combination of both. Non-polymorphic probes can provide only copy number information, while SNP arrays can provide both copy number and loss-of-heterozygosity (LOH) status in one assay. The probe types used for non-polymorphic arrays include cDNA, BAC clones (e.g., BlueGnome ), and oligonucleotides (e.g., Agilent , Santa Clara, CA, USA or Nimblegen , Madison, WI, USA). Commercially available oligonucleotide SNP arrays can be solid phase ( Affymetrix , Santa Clara, CA, USA) or bead-based ( Illumina , San Diego, CA, USA). Despite the diversity of platforms, ultimately they all use genomic DNA from disrupted cells to recreate a high resolution karyotype in silico . The end product does not yet have a consistent name, and has been called virtual karyotyping, [ 8 ] [ 10 ] digital karyotyping, [ 11 ] molecular allelokaryotyping, [ 12 ] and molecular karyotyping. [ 13 ] Other terms used to describe the arrays used for karyotyping include SOMA (SNP oligonucleotide microarrays) [ 14 ] and CMA (chromosome microarray). [ 15 ] [ 16 ] Some consider all platforms to be a type of array comparative genomic hybridization (arrayCGH), while others reserve that term for two-dye methods, and still others segregate SNP arrays because they generate more and different information than two-dye arrayCGH methods. [ citation needed ]
Copy number changes can be seen in both germline and tumor samples. Copy number changes can be detected by arrays with non-polymorphic probes, such as arrayCGH, and by SNP-based arrays. Human beings are diploid, so a normal copy number is always two for the non-sex chromosomes. [ citation needed ]
Autozygous segments and uniparental disomy (UPD) are diploid/'copy neutral' genetic findings and therefore are only detectable by SNP-based arrays. Both autozygous segments and UPD will show loss of heterozygosity (LOH) with a copy number of two by SNP array karyotyping. Runs of homozygosity (ROH) is a generic term that can be used for either autozygous segments or UPD. [ citation needed ]
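The distinction that only SNP-based arrays can make, between a deletion and copy-neutral LOH (acquired UPD or autozygosity), can be sketched numerically: both show loss of heterozygosity in the genotype calls, but only the deletion also shows a drop in total copy number. The thresholds and values below are illustrative only, not those of any specific analysis software.

```python
# Illustrative classification of SNP-array segments into deletion vs
# copy-neutral LOH (UPD/autozygosity); numbers and thresholds are toy values.

def classify_segment(mean_log_r, het_fraction):
    """het_fraction: share of SNPs in the segment called heterozygous."""
    loh = het_fraction < 0.05            # essentially no heterozygous SNPs
    if loh and mean_log_r < -0.3:
        return "hemizygous deletion (LOH with copy loss)"
    if loh and abs(mean_log_r) < 0.1:
        return "copy-neutral LOH (acquired UPD or autozygosity)"
    if not loh and abs(mean_log_r) < 0.1:
        return "normal diploid"
    return "other / review manually"

print(classify_segment(-0.55, 0.01))     # -> hemizygous deletion
print(classify_segment(0.02, 0.01))      # -> copy-neutral LOH
print(classify_segment(0.00, 0.30))      # -> normal diploid
```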
Figure 7 is a SNP array virtual karyotype from a colorectal carcinoma demonstrating deletions, gains, amplifications, and acquired UPD (copy neutral LOH).
A virtual karyotype can be generated from nearly any tumor, but the clinical meaning of the genomic aberrations identified are different for each tumor type. Clinical utility varies and appropriateness is best determined by an oncologist or pathologist in consultation with the laboratory director of the lab performing the virtual karyotype. Below are examples of types of cancers where the clinical implications of specific genomic aberrations are well established. This list is representative, not exhaustive. The web site for the Cytogenetics Laboratory at Wisconsin State Laboratory of Hygiene has additional examples of clinically relevant genetic changes that are readily detectable by virtual karyotyping. [1]
Based on a series of 493 neuroblastoma samples, it has been reported that overall genomic pattern, as tested by array-based karyotyping, is a predictor of outcome in neuroblastoma: [ 24 ]
Earlier publications categorized neuroblastomas into three major subtypes based on cytogenetic profiles: [ 25 ]
Tumor-specific loss-of-heterozygosity (LOH) for chromosomes 1p and 16q identifies a subset of Wilms' tumor patients who have a significantly increased risk of relapse and death. LOH for these chromosomal regions can now be used as an independent prognostic factor together with disease stage to target intensity of treatment to risk of treatment failure. [ 26 ] [ 27 ]
Renal epithelial neoplasms have characteristic cytogenetic aberrations that can aid in classification. [ 28 ] See also Atlas of Genetics and Cytogenetics in Oncology and Haematology .
Array-based karyotyping can be used to identify characteristic chromosomal aberrations in renal tumors with challenging morphology. [ 8 ] [ 10 ] Array-based karyotyping performs well on paraffin embedded tumors [ 29 ] and is amenable to routine clinical use.
In addition, recent literature indicates that certain chromosomal aberrations are associated with outcome in specific subtypes of renal epithelial tumors. [ 30 ] Clear cell renal carcinoma: del 9p and del 14q are poor prognostic indicators. [ 31 ] [ 32 ] Papillary renal cell carcinoma: duplication of 1q marks fatal progression. [ 33 ]
Array-based karyotyping is a cost-effective alternative to FISH for detecting chromosomal abnormalities in chronic lymphocytic leukemia (CLL). Several clinical validation studies have shown >95% concordance with the standard CLL FISH panel. [ 12 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] In addition, many studies using array-based karyotyping have identified 'atypical deletions' missed by the standard FISH probes and acquired uniparental disomy at key loci for prognostic risk in CLL. [ 38 ] [ 39 ]
Four main genetic aberrations are recognized in CLL cells that have a major impact on disease behavior. [ 40 ]
Avet-Loiseau, et al. in Journal of Clinical Oncology , used SNP array karyotyping of 192 multiple myeloma (MM) samples to identify genetic lesions associated with prognosis, which were then validated in a separate cohort (n = 273). [ 41 ] In MM, lack of a proliferative clone makes conventional cytogenetics informative in only ~30% of cases. FISH panels are useful in MM, but standard panels would not detect several key genetic abnormalities reported in this study. [ citation needed ]
Array-based karyotyping cannot detect balanced translocations, such as t(4;14) seen in ~15% of MM. Therefore, FISH for this translocation should also be performed if using SNP arrays to detect genome-wide copy number alterations of prognostic significance in MM. [ citation needed ]
Array-based karyotyping of 260 medulloblastomas by Pfister S, et al. resulted in the following clinical subgroups based on cytogenetic profiles: [ 42 ]
The 1p/19q co-deletion is considered a "genetic signature" of oligodendroglioma . Allelic losses on 1p and 19q, either separately or combined, are more common in classic oligodendrogliomas than in either astrocytomas or oligoastrocytomas. [ 43 ] In one study, classic oligodendrogliomas showed 1p loss in 35 of 42 (83%) cases, 19q loss in 28 of 39 (72%), and these were combined in 27 of 39 (69%) cases; there was no significant difference in 1p/19q loss of heterozygosity status between low-grade and anaplastic oligodendrogliomas. [ 43 ] 1p/19q co-deletion has been correlated with both chemosensitivity and improved prognosis in oligodendrogliomas. [ 44 ] [ 45 ] Most larger cancer treatment centers routinely check for the deletion of 1p/19q as part of the pathology report for oligodendrogliomas. The status of the 1p/19q loci can be detected by FISH or virtual karyotyping. Virtual karyotyping has the advantage of assessing the entire genome in one assay, as well as the 1p/19q loci. This allows assessment of other key loci in glial tumors, such as EGFR and TP53 copy number status. [ citation needed ]
Whereas the prognostic relevance of 1p and 19q deletions is well established for anaplastic oligodendrogliomas and mixed oligoastrocytomas, the prognostic relevance of the deletions for low-grade gliomas is more controversial. In terms of low-grade gliomas, a recent study also suggests that 1p/19q co-deletion may be associated with a (1;19)(q10;p10) translocation which, like the combined 1p/19q deletion, is associated with superior overall survival and progression-free survival in low-grade glioma patients. [ 46 ] Oligodendrogliomas only rarely show mutations in the p53 gene, in contrast to other gliomas. [ 47 ] Epidermal growth factor receptor amplification and whole 1p/19q codeletion are mutually exclusive and predictive of completely different outcomes, with EGFR amplification predicting poor prognosis. [ 48 ]
Yin et al. [ 49 ] studied 55 glioblastoma and 6 GBM cell lines using SNP array karyotyping. Acquired UPD was identified at 17p in 13/61 cases. A significantly shortened survival time was found in patients with 13q14 (RB) deletion or 17p13.1 (p53) deletion/acquired UPD. Taken together, these results suggest that this technique is a rapid, robust, and inexpensive method to profile genome-wide abnormalities in GBM. Because SNP array karyotyping can be performed on paraffin embedded tumors, it is an attractive option when tumor cells fail to grow in culture for metaphase cytogenetics or when the desire for karyotyping arises after the specimen has been formalin fixed. [ citation needed ]
The importance of detecting acquired UPD (copy neutral LOH) in glioblastoma: [ citation needed ]
In addition, in cases with uncertain grade by morphology, genomic profiling can assist in diagnosis.
Cytogenetics , the study of characteristic large changes in the chromosomes of cancer cells , has been increasingly recognized as an important predictor of outcome in acute lymphoblastic leukemia (ALL). [ 52 ] NB: Balanced translocations cannot be detected by array-based karyotyping (see Limitations below).
Some cytogenetic subtypes have a worse prognosis than others. These include:
Correlation of prognosis with bone marrow cytogenetic finding in acute lymphoblastic leukemia
Unclassified ALL is considered to have an intermediate prognosis. [ 56 ]
Myelodysplastic syndrome (MDS) has remarkable clinical, morphological, and genetic heterogeneity. Cytogenetics play a decisive role in the World Health Organization's classification-based International Prognostic Scoring System (IPSS) for MDS. [ 57 ] [ 58 ]
In a comparison of metaphase cytogenetics, FISH panel, and SNP array karyotyping for MDS, it was found that each technique provided a similar diagnostic yield. No single method detected all defects, and detection rates improved by ~5% when all three methods were used. [ 59 ]
Acquired UPD, which is not detectable by FISH or cytogenetics, has been reported at several key loci in MDS using SNP array karyotyping, including deletion of 7/7q. [ 60 ] [ 61 ]
Philadelphia chromosome–negative myeloproliferative neoplasms (MPNs) including polycythemia vera, essential thrombocythemia, and primary myelofibrosis show an inherent tendency for transformation into leukemia (MPN-blast phase), which is accompanied by acquisition of additional genomic lesions.
In a study of 159 cases, [ 62 ] SNP-array analysis was able to capture practically all cytogenetic abnormalities and to uncover additional lesions with potentially important clinical implications. [ citation needed ]
Identification of biomarkers in colorectal cancer is particularly important for patients with stage II disease, where less than 20% have tumor recurrence. 18q LOH is an established biomarker associated with high risk of tumor recurrence in stage II colon cancer. [ 63 ] Figure 7 shows a SNP array karyotype of a colorectal carcinoma (whole genome view).
Colorectal cancers are classified into specific tumor phenotypes based on molecular profiles [ 63 ] which can be integrated with the results of other ancillary tests, such as microsatellite instability testing, IHC, and KRAS mutation status:
Malignant rhabdoid tumors are rare, highly aggressive neoplasms found most commonly in infants and young children. Due to their heterogeneous histologic features, diagnosis can often be difficult and misclassifications can occur. In these tumors, the INI1 gene (SMARCB1) on chromosome 22q functions as a classic tumor suppressor gene. Inactivation of INI1 can occur via deletion, mutation, or acquired UPD. [ 64 ]
In a recent study, [ 64 ] SNP array karyotyping identified deletions or LOH of 22q in 49/51 rhabdoid tumors. Of these, 14 were copy neutral LOH (or acquired UPD), which is detectable by SNP array karyotyping, but not by FISH, cytogenetics, or arrayCGH. MLPA detected a single exon homozygous deletion in one sample that was below the resolution of the SNP array. [ citation needed ]
SNP array karyotyping can be used to distinguish, for example, a medulloblastoma with an isochromosome 17q from a primary rhabdoid tumor with loss of 22q11.2. When indicated, molecular analysis of INI1 using MLPA and direct sequencing may then be employed. Once the tumor-associated changes are found, an analysis of germline DNA from the patient and the parents can be done to rule out an inherited or de novo germline mutation or deletion of INI1, so that appropriate recurrence risk assessments can be made. [ 64 ]
The most important genetic alteration associated with poor prognosis in uveal melanoma is loss of an entire copy of Chromosome 3 ( Monosomy 3), which is strongly correlated with metastatic spread. [ 65 ] Gains on chromosomes 6 and 8 are often used to refine the predictive value of the Monosomy 3 screen, with gain of 6p indicating a better prognosis and gain of 8q indicating a worse prognosis in disomy 3 tumors. [ 66 ] In rare instances, monosomy 3 tumors may duplicate the remaining copy of the chromosome to return to a disomic state referred to as isodisomy . [ 67 ] Isodisomy 3 is prognostically equivalent to monosomy 3, and both can be detected by tests for chromosome 3 loss of heterozygosity . [ 68 ]
Unlike karyotypes obtained from conventional cytogenetics, virtual karyotypes are reconstructed by computer programs using signals obtained from disrupted DNA. In essence, the computer program will correct translocations when it lines up the signals in chromosomal order. Therefore, virtual karyotypes cannot detect balanced translocations and inversions . They also can only detect genetic aberrations in regions of the genome that are represented by probes on the array. In addition, virtual karyotypes generate a relative copy number normalized against a diploid genome, so tetraploid genomes will be condensed into a diploid space unless renormalization is performed. Renormalization requires an ancillary cell-based assay, such as FISH, if one is using arrayCGH. For karyotypes obtained from SNP-based arrays, tetraploidy can often be inferred from the maintenance of heterozygosity within a region of apparent copy number loss. [ 22 ] Low-level mosaicism or small subclones may not be detected by virtual karyotypes because the presence of normal cells in the sample will dampen the signal from the abnormal clone. The exact point of failure, in terms of the minimal percentage of neoplastic cells, will depend on the particular platform and algorithms used. Many copy number analysis software programs used to generate array-based karyotypes will falter with less than 25–30% tumor/abnormal cells in the sample. However, in oncology applications this limitation can be minimized by tumor enrichment strategies and software optimized for use with oncology samples. The analysis algorithms are evolving rapidly, and some are even designed to thrive on 'normal clone contamination', [ 69 ] so it is anticipated that this limitation will continue to dissipate.
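For intuition about why low tumor content dampens the copy number signal, here is a minimal sketch, not taken from the source, that assumes a simple linear mixture of tumor and normal cells; it shows how the expected log2 ratio of a single-copy deletion shrinks toward zero as normal contamination increases.

```python
import math

def expected_log2_ratio(tumor_fraction, tumor_copy_number, normal_copy_number=2):
    """Expected array log2 ratio for a mixture of tumor and normal cells.

    Assumes a simple linear mixture model: the measured copy number is the
    cell-fraction-weighted average of tumor and normal copy numbers,
    normalized against the diploid reference.
    """
    mixed = tumor_fraction * tumor_copy_number + (1 - tumor_fraction) * normal_copy_number
    return math.log2(mixed / normal_copy_number)

# Signal for a single-copy deletion (tumor copy number 1) at different purities.
for tf in (1.0, 0.5, 0.3, 0.2):
    print(f"tumor fraction {tf:.0%}: log2 ratio {expected_log2_ratio(tf, 1):+.2f}")
# A pure sample gives -1.00, but at 20-30% tumor content the ratio shrinks
# toward zero, which is why low tumor purity can defeat copy number callers.
```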
|
https://en.wikipedia.org/wiki/Virtual_karyotype
|
In knot theory , a virtual knot is a generalization of knots in 3-dimensional Euclidean space , R 3 , to knots in thickened surfaces Σ × [ 0 , 1 ] {\displaystyle \Sigma \times [0,1]} modulo an equivalence relation called stabilization/destabilization. Here Σ {\displaystyle \Sigma } is required to be closed and oriented. Virtual knots were first introduced by Kauffman (1999) .
In the theory of classical knots, knots can be considered equivalence classes of knot diagrams under the Reidemeister moves . Likewise, a virtual knot can be considered an equivalence class of virtual knot diagrams under generalized Reidemeister moves. Virtual knots allow, for example, knots whose Gauss codes could not arise from any knot in 3-dimensional Euclidean space . A virtual knot diagram is a 4-valent planar graph, but each vertex is now allowed to be either a classical crossing or a new type called virtual. The generalized moves show how to manipulate such diagrams to obtain an equivalent diagram; one move, called the semi-virtual move, involves both classical and virtual crossings, but all the other moves involve only one variety of crossing.
A classical knot can also be considered an equivalence class of Gauss diagrams under certain moves coming from the Reidemeister moves. Not all Gauss diagrams are realizable as knot diagrams, but by considering all equivalence classes of Gauss diagrams we obtain virtual knots.
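As an informal illustration of why some Gauss codes have no classical realization, the following sketch, which is not part of the source article, applies Gauss's classical parity (evenness) condition, a necessary but not sufficient criterion for a double-occurrence word to come from a planar closed-curve diagram. Over/under and sign information is omitted, since the condition concerns only the underlying double-occurrence word.

```python
def gauss_parity_ok(code):
    """Gauss's evenness condition: in a classically realizable Gauss code,
    the two occurrences of every crossing label sit at positions of opposite
    parity, i.e. an even number of letters lies between them.  This is a
    necessary (not sufficient) condition for planar realizability; codes
    that violate it can only be realized as virtual knot diagrams.
    Assumes a double-occurrence word (each label appears exactly twice).
    """
    positions = {}
    for i, label in enumerate(code):
        positions.setdefault(label, []).append(i)
    return all((a + b) % 2 == 1 for a, b in positions.values())

print(gauss_parity_ok([1, 2, 3, 1, 2, 3]))  # True  - trefoil, classically realizable
print(gauss_parity_ok([1, 2, 1, 2]))        # False - no classical diagram has this code
```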
A classical knot can be considered an ambient isotopy class of embeddings of the circle into a thickened 2-sphere. This can be generalized by considering such classes of embeddings into thickened higher-genus surfaces. This is not quite what we want since adding a handle to a (thick) surface will create a higher-genus embedding of the original knot. The adding of a handle is called stabilization and the reverse process destabilization. Thus a virtual knot can be considered an ambient isotopy class of embeddings of the circle into thickened surfaces with the equivalence given by (de)stabilization.
Several basic theorems relate classical and virtual knots; in particular, two classical knots that are equivalent as virtual knots are already equivalent as classical knots, so classical knot theory embeds faithfully into virtual knot theory.
|
https://en.wikipedia.org/wiki/Virtual_knot
|
In computer security , virtual machine escape ( VM escape ) is the process of a program breaking out of the virtual machine (VM) on which it is running and interacting with the host operating system . [ 1 ] In theory, a virtual machine is a "completely isolated guest operating system installation within a normal host operating system", [ 2 ] but this is not always the case in practice.
For example, in 2008, a vulnerability ( CVE - 2008-0923 ) in VMware discovered by Core Security Technologies made VM escape possible on VMware Workstation 6.0.2 and 5.5.4. [ 3 ] [ 4 ] A fully working exploit labeled Cloudburst was developed by Immunity Inc. for Immunity CANVAS (a commercial penetration testing tool). [ 5 ] Cloudburst was presented at Black Hat USA 2009. [ 6 ]
|
https://en.wikipedia.org/wiki/Virtual_machine_escape
|
Virtual microscopy is a method of posting microscope images on, and transmitting them over, computer networks. This allows independent viewing of images by large numbers of people in diverse locations. It involves a synthesis of microscopy technologies and digital technologies. [ 1 ] The use of virtual microscopes can transform traditional teaching methods by shifting from reliance on physical space, equipment, and specimens to a model that depends only on a computer with internet access. This increases the convenience of accessing the slide sets and makes the slides available to a broader audience. Digitized slides can have a high resolution and are resistant to being damaged or broken over time. [ 2 ]
Prior to recent advances in virtual microscopy, slides were commonly digitized by various forms of film scanner and image resolutions rarely exceeded 5000 dpi. Nowadays, it is possible to achieve more than 100,000 dpi and thus resolutions approaching that visible under the optical microscope . This increase in scanning resolution comes at a price; whereas a typical flatbed or film scanner ranges in cost from $200 to $600, a 100,000 dpi slide scanner will range from $80,000 to $200,000. [ 3 ]
|
https://en.wikipedia.org/wiki/Virtual_microscopy
|
A virtual mixer is a software application that runs on a computer or other digital audio system. Providing the same functionality as a digital or analog mixing console , a virtual mixer takes the audio outputs of many separate tracks or live sources and combines them into a pair of stereo outputs or other routed subgroups for auxiliary outputs.
Around the mid-1990s, computers achieved a level of processing power that allowed professional recordings to be made digitally. In the following decade, many artists began recording their own music in home studios with the aid of DAW ( digital audio workstation ) software like GarageBand or Pro Tools. It was this move away from high-end studios, together with the growth of computing power in personal computers, that gave rise to virtual mixers requiring minimal to no physical interface.
The design of most virtual mixers is modeled after physical mixers. The individual channel strips are arranged side-by-side and the user is given control over level and pan. There is also a single master fader for the stereo output. The actual controls are also modeled after physical mixers, featuring faders and knobs that can be controlled using a mouse and keyboard shortcuts.
Each channel displays a decibel meter and slots for optional third-party plugins, which include effects such as EQ, compression , and gates. These plugins can be implemented in a number of ways. Each channel allows plugins to be added via dropdown menus from a number of slots; through this method, plugins are applied to individual channels. Alternatively, plugins can be applied to several channels at once by busing the desired channels to another track, in which case the amount of the effect can be controlled through the fader of the bused channel.
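As an informal illustration of what such a mixer does internally, the following sketch sums mono tracks into a stereo bus with per-channel gain and a constant-power pan law. The function names and the choice of pan law are illustrative assumptions, not a description of any particular product.

```python
import numpy as np

def db_to_linear(db):
    """Convert a fader setting in decibels to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def mix_to_stereo(tracks, gains_db, pans):
    """Sum mono tracks into a stereo bus.

    tracks   : list of 1-D numpy arrays (equal length, mono audio)
    gains_db : per-channel fader positions in dB
    pans     : per-channel pan positions, -1.0 (hard left) to +1.0 (hard right)

    Uses a constant-power pan law, one common choice among several.
    """
    length = len(tracks[0])
    left = np.zeros(length)
    right = np.zeros(length)
    for track, gain_db, pan in zip(tracks, gains_db, pans):
        g = db_to_linear(gain_db)
        theta = (pan + 1.0) * np.pi / 4.0   # maps pan to 0 .. pi/2
        left += g * np.cos(theta) * track   # pan toward -1 puts more level in L
        right += g * np.sin(theta) * track
    return np.stack([left, right])

# Two one-second test tones mixed with different fader and pan settings.
t = np.linspace(0, 1, 44100, endpoint=False)
kick = np.sin(2 * np.pi * 60 * t)
synth = np.sin(2 * np.pi * 440 * t)
stereo = mix_to_stereo([kick, synth], gains_db=[-3.0, -6.0], pans=[0.0, 0.7])
```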
|
https://en.wikipedia.org/wiki/Virtual_mixer
|
A virtual network interface (VNI) is an abstract virtualized representation of a computer network interface that may or may not correspond directly to a network interface controller .
It is common for the operating system kernel to maintain a table of virtual network interfaces in memory. This may allow the system to store and operate on such information independently of the physical interface involved (or even whether it is a direct physical interface or for instance a tunnel or a bridged interface). It may also allow processes on the system to interact concerning network connections in a more granular fashion than simply assuming a single amorphous Internet (of unknown capacity or performance). [ citation needed ]
W. Richard Stevens , in volume 2 of his treatise entitled TCP/IP Illustrated , refers to the kernel's Virtual Interface Table in his discussion of multicast routing. For example, a multicast router may operate differently on interfaces that represent tunnels than on physical interfaces (e.g. it may only need to collect membership information for physical interfaces). Thus the virtual interface may need to divulge some specifics to the user, such as whether or not it represents a physical interface directly. [ 1 ]
In addition to allowing user space applications to refer to abstract network interface connections, in some systems a virtual interface framework may allow processes to better coordinate the sharing of a given physical interface (beyond the default operating system behavior) by hierarchically subdividing it into abstract interfaces with specified bandwidth limits and queueing models. This can imply restriction of the process, e.g. by inheriting a limited branch of such a hierarchy from which it may not stray. [ citation needed ]
This extra layer of network abstraction is often unnecessary and may have a minor performance penalty. However, it is also possible to use such a layer of abstraction to work around a performance bottleneck, indeed even to bypass the kernel for optimization purposes. [ 2 ]
The term VIF has also been applied when an application itself virtualizes or abstracts network interfaces. Since most software need not concern itself with the particulars of network interfaces, and since the desired abstraction may already be available through the operating system, such application-level usage is rare.
|
https://en.wikipedia.org/wiki/Virtual_network_interface
|
A virtual particle is a theoretical transient particle that exhibits some of the characteristics of an ordinary particle, while having its existence limited by the uncertainty principle , which allows the virtual particles to spontaneously emerge from vacuum at short time and space ranges. [ 1 ] The concept of virtual particles arises in the perturbation theory of quantum field theory (QFT) where interactions between ordinary particles are described in terms of exchanges of virtual particles. A process involving virtual particles can be described by a schematic representation known as a Feynman diagram , in which virtual particles are represented by internal lines. [ 2 ] [ 3 ]
Virtual particles do not necessarily carry the same mass as the corresponding ordinary particle, although they always conserve energy and momentum . The closer its characteristics come to those of ordinary particles, the longer the virtual particle exists. They are important in the physics of many processes, including particle scattering and Casimir forces . In quantum field theory, forces—such as the electromagnetic repulsion or attraction between two charges—can be thought of as resulting from the exchange of virtual photons between the charges. Virtual photons are the exchange particles for the electromagnetic interaction .
The term is somewhat loose and vaguely defined, [ 4 ] in that it refers to the view that the world is made up of "real particles". "Real particles" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are "temporary" in the sense that they appear in calculations of interactions, but never as asymptotic states or indices to the scattering matrix . The accuracy and use of virtual particles in calculations is firmly established, but as they cannot be detected in experiments, deciding how to precisely describe them is a topic of debate. [ 5 ] Although widely used, they are by no means a necessary feature of QFT, but rather are mathematical conveniences — as demonstrated by lattice field theory , which avoids using the concept altogether.
The concept of virtual particles arises in the perturbation theory of quantum field theory , an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles. Such calculations are often performed using schematic representations known as Feynman diagrams , in which virtual particles appear as internal lines. By expressing the interaction in terms of the exchange of a virtual particle with four-momentum q , where q is given by the difference between the four-momenta of the particles entering and leaving the interaction vertex, both momentum and energy are conserved at the interaction vertices of the Feynman diagram. [ 6 ] : 119
A virtual particle does not precisely obey the energy–momentum relation m 2 c 4 = E 2 − p 2 c 2 {\displaystyle m^{2}c^{4}=E^{2}-p^{2}c^{2}} . Its kinetic energy may not have the usual relationship to velocity . It can be negative. [ 7 ] : 110 This is expressed by the phrase off mass shell . [ 6 ] : 119 The probability amplitude for a virtual particle to exist tends to be canceled out by destructive interference over longer distances and times. As a consequence, a real photon is massless and thus has only two polarization states, whereas a virtual one, being effectively massive, has three polarization states.
Quantum tunnelling may be considered a manifestation of virtual particle exchanges. [ 8 ] : 235 The range of forces carried by virtual particles is limited by the uncertainty principle, which regards energy and time as conjugate variables; thus, virtual particles of larger mass have more limited range. [ 9 ]
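A rough way to see this mass-range relationship is to estimate the range of the force as the reduced Compton wavelength ħ/(mc) of the exchanged particle. The following back-of-envelope sketch, an illustration rather than a rigorous derivation, uses ħc ≈ 197.3 MeV·fm.

```python
# Rough range estimate for a force carried by a virtual particle of mass m:
# the uncertainty principle limits its lifetime to ~ hbar / (m c^2), so it can
# travel at most roughly hbar / (m c), its reduced Compton wavelength.
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * fm

def force_range_fm(rest_energy_mev):
    """Order-of-magnitude range (in femtometres) for exchange of a virtual
    particle with the given rest energy (in MeV)."""
    return HBAR_C_MEV_FM / rest_energy_mev

print(f"pion (~139.6 MeV): {force_range_fm(139.6):.2f} fm")    # ~1.4 fm, nuclear force scale
print(f"W boson (~80.4 GeV): {force_range_fm(80400):.4f} fm")  # ~0.0025 fm, weak force scale
```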
Written in the usual mathematical notations, in the equations of physics, there is no mark of the distinction between virtual and actual particles. The amplitudes of processes with a virtual particle interfere with the amplitudes of processes without it, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more. In the quantum field theory view, actual particles are viewed as being detectable excitations of underlying quantum fields. Virtual particles are also viewed as excitations of the underlying fields, but appear only as forces, not as detectable particles. They are "temporary" in the sense that they appear in some calculations, but are not detected as single particles. Thus, in mathematical terms, they never appear as indices to the scattering matrix , which is to say, they never appear as the observable inputs and outputs of the physical process being modelled.
There are two principal ways in which the notion of virtual particles appears in modern physics. They appear as intermediate terms in Feynman diagrams ; that is, as terms in a perturbative calculation. They also appear as an infinite set of states to be summed or integrated over in the calculation of a semi-non-perturbative effect. In the latter case, it is sometimes said that virtual particles contribute to a mechanism that mediates the effect, or that the effect occurs through the virtual particles. [ 6 ] : 118
There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange. Confinement can lead to a short range, too. Examples of such short-range interactions are the strong and weak forces, and their associated field bosons.
For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitative effects in the near field zone of coils and antennas.
Some field interactions which may be seen in terms of virtual particles are:
Most of these have analogous effects in solid-state physics ; indeed, one can often gain a better intuitive understanding by examining these cases. In semiconductors , the roles of electrons, positrons and photons in field theory are replaced by electrons in the conduction band , holes in the valence band , and phonons or vibrations of the crystal lattice. A virtual particle is in a virtual state where the probability amplitude is not conserved. Examples of macroscopic virtual phonons, photons, and electrons in the case of the tunneling process were presented by Günter Nimtz [ 11 ] and Alfons A. Stahlhofen. [ 12 ]
The calculation of scattering amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented as Feynman diagrams . The appeal of the Feynman diagrams is strong, as it allows for a simple visual presentation of what would otherwise be a rather arcane and abstract formula. In particular, part of the appeal is that the outgoing legs of a Feynman diagram can be associated with actual, on-shell particles. Thus, it is natural to associate the other lines in the diagram with particles as well, called the "virtual particles". In mathematical terms, they correspond to the propagators appearing in the diagram.
In the adjacent image, the solid lines correspond to actual particles (of momentum p 1 and so on), while the dotted line corresponds to a virtual particle carrying momentum k . For example, if the solid lines were to correspond to electrons interacting by means of the electromagnetic interaction , the dotted line would correspond to the exchange of a virtual photon . In the case of interacting nucleons , the dotted line would be a virtual pion . In the case of quarks interacting by means of the strong force , the dotted line would be a virtual gluon , and so on.
Virtual particles may be mesons or vector bosons , as in the example above; they may also be fermions . However, in order to preserve quantum numbers, most simple diagrams involving fermion exchange are prohibited. The image to the right shows an allowed diagram, a one-loop diagram . The solid lines correspond to a fermion propagator, the wavy lines to bosons.
In formal terms, a particle is considered to be an eigenstate of the particle number operator a † a , where a is the particle annihilation operator and a † the particle creation operator (sometimes collectively called ladder operators ). In many cases, the particle number operator does not commute with the Hamiltonian for the system. This implies the number of particles in an area of space is not a well-defined quantity but, like other quantum observables , is represented by a probability distribution . Since these particles are not certain to exist, they are called virtual particles or vacuum fluctuations of vacuum energy . In a certain sense, they can be understood to be a manifestation of the time-energy uncertainty principle in a vacuum. [ 13 ]
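The following minimal numerical sketch, an illustration that is not part of the source, builds the ladder operators on a truncated Fock space and checks that the eigenvalues of the number operator a†a are the non-negative integers, i.e. the possible particle counts.

```python
import numpy as np

N = 6  # truncate the Fock space at 6 levels

# Annihilation operator a on the basis |0>, |1>, ..., |N-1>:
# a|n> = sqrt(n) |n-1>, so it has sqrt(1..N-1) on the first superdiagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_dag = a.conj().T                    # creation operator
number_op = a_dag @ a                 # particle number operator a†a

eigenvalues = np.linalg.eigvalsh(number_op)
print(np.round(eigenvalues, 10))      # [0. 1. 2. 3. 4. 5.]

# The commutator [a, a†] = 1 holds up to the truncation artifact in the last level.
print(np.round(a @ a_dag - a_dag @ a, 10))
```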
An important example of the "presence" of virtual particles in a vacuum is the Casimir effect . [ 14 ] Here, the explanation of the effect requires that the total energy of all of the virtual particles in a vacuum can be added together. Thus, although the virtual particles themselves are not directly observable in the laboratory, they do leave an observable effect: Their zero-point energy results in forces acting on suitably arranged metal plates or dielectrics . [ 15 ] On the other hand, the Casimir effect can be interpreted as the relativistic van der Waals force . [ 16 ]
Virtual particles are often popularly described as coming in pairs, a particle and antiparticle which can be of any kind. These pairs exist for an extremely short time, and then mutually annihilate, or in some cases, the pair may be boosted apart using external energy so that they avoid annihilation and become actual particles, as described below.
This may occur in one of two ways. In an accelerating frame of reference , the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect . In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium .
Another example is pair production in very strong electric fields, sometimes called vacuum decay . If, for example, a pair of atomic nuclei are merged to very briefly form a nucleus with a charge greater than about 140 (that is, larger than about the inverse of the fine-structure constant , which is a dimensionless quantity ), the strength of the electric field will be such that it will be energetically favorable [ further explanation needed ] to create positron–electron pairs out of the vacuum or Dirac sea , with the electron attracted to the nucleus to annihilate the positive charge. This pair-creation amplitude was first calculated by Julian Schwinger in 1951.
As a consequence of quantum mechanical uncertainty , any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. For this reason, virtual particles – which exist only temporarily as they are exchanged between ordinary particles – do not typically obey the mass-shell relation ; the longer a virtual particle exists, the more the energy and momentum approach the mass-shell relation.
The lifetime of real particles is typically vastly longer than the lifetime of the virtual particles. Electromagnetic radiation consists of real photons which may travel light years between the emitter and absorber, but (Coulombic) electrostatic attraction and repulsion is a relatively short-range [ dubious – discuss ] force that is a consequence of the exchange of virtual photons [ citation needed ] .
|
https://en.wikipedia.org/wiki/Virtual_particle
|
Virtual prototyping is a method in the process of product development . It involves using computer-aided design (CAD), computer-automated design (CAutoD) and computer-aided engineering (CAE) software to validate a design before committing to making a physical prototype . This is done by creating (usually 3D) computer-generated geometrical shapes (parts), combining them into an "assembly", and testing different mechanical motions, fit, and function. The assembly or individual parts can be opened in CAE software as digital twins to simulate the behavior of the product in the real world.
The product design and development process used to rely primarily on engineers' experience and judgment in producing an initial concept design. A physical prototype was then constructed and tested in order to evaluate its performance. Without any way to evaluate its performance in advance, the initial prototype was highly unlikely to meet expectations. Engineers usually had to re-design the initial concept multiple times to address weaknesses that were revealed in physical testing.
Today, manufacturers are under pressure to reduce time to market and optimize products to higher levels of performance and reliability. A much higher number of products are being developed in the form of virtual prototypes in which engineering simulation software is used to predict performance prior to constructing physical prototypes. Engineers can quickly explore the performance of thousands of design alternatives without investing the time and money required to build physical prototypes. The ability to explore a wide range of design alternatives leads to improvements in performance and design quality. Yet the time required to bring the product to market is usually reduced substantially because virtual prototypes can be produced much faster than physical prototypes. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
End-to-end prototyping accounts fully for how a product or a component is manufactured and assembled, and it links the consequences of those processes to performance. Early availability of such physically realistic virtual prototypes allows testing and performance confirmation to take place as design decisions are made; enabling the acceleration of the design activity and providing more insight on the relationship between manufacturing and performance than can be achieved by building and testing physical prototypes. The benefits include reduced costs in both design and manufacturing as physical prototyping and testing is dramatically reduced/eliminated and lean but robust manufacturing processes are selected. [ 5 ]
The research firm Aberdeen Group reports that best-in-class manufacturers, who make extensive use of simulation early in the design process, hit revenue, cost, and launch date and quality targets for 86% or more of their products. [ 6 ] Best-in-class manufacturers of the most complex products get to market 158 days earlier with $1.9 million lower costs than all other manufacturers. Best-in-class manufacturers of the simplest products get to market 21 days earlier with $21,000 fewer product development costs. [ 7 ]
Fisker Automotive used virtual prototyping to design the rear structure and other areas of its Karma plug-in hybrid to ensure the integrity of the fuel tank in a rear end crash as required for Federal Motor Vehicle Safety Standards (FMVSS) 301 certification. [ 8 ] Agilent Technologies used virtual prototyping to design cooling systems for the calibration head for a new high-speed oscilloscope . [ 9 ] Miele used virtual prototyping to improve the development of its washer-disinfector machines by simulating their operational characteristics early in the design cycle . [ 10 ] Several CAE software solutions (for example, Working Model and SimWise) offer the possibility to check the benefits of virtual prototyping even for students and small companies, and collection of case studies are available since 1996. [ 11 ]
|
https://en.wikipedia.org/wiki/Virtual_prototyping
|
Virtual screening ( VS ) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target , typically a protein receptor or enzyme . [ 2 ] [ 3 ]
Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs. [ 4 ] As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds [ 5 ] can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process. [ 6 ] [ 1 ] Virtual screening can be used to select compounds from in-house databases for screening, to choose compounds that can be purchased externally, and to choose which compound should be synthesized next.
There are two broad categories of screening techniques: ligand-based and structure-based. [ 7 ] The remainder of this page will reflect Figure 1, the flow chart of virtual screening.
Given a set of structurally diverse ligands that bind to a receptor , a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. Different computational techniques explore the structural, electronic, molecular shape, and physicochemical similarities of different ligands that could imply their mode of action against a specific molecular receptor or cell lines. [ 8 ] A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind. [ 9 ] Different 2D chemical similarity analysis methods [ 10 ] have been used to scan databases to find active ligands. Another popular approach in ligand-based virtual screening consists of searching for molecules whose shape is similar to that of known actives, as such molecules will fit the target's binding site and hence will be likely to bind the target. There are a number of prospective applications of this class of techniques in the literature. [ 11 ] [ 12 ] [ 13 ] Pharmacophoric extensions of these 3D methods are also freely available as webservers. [ 14 ] [ 15 ] Shape-based virtual screening has also gained significant popularity. [ 16 ]
The structure-based virtual screening approach includes various computational techniques that consider the structure of the receptor that is the molecular target of the investigated active ligands. Some of these techniques include molecular docking , structure-based pharmacophore prediction, and molecular dynamics simulations. [ 17 ] [ 18 ] [ 8 ] Molecular docking is the most widely used structure-based technique; it applies a scoring function to estimate the fitness of each ligand against the binding site of the macromolecular receptor, helping to choose the ligands with the highest affinity. [ 19 ] [ 20 ] [ 21 ] Currently, there are some webservers oriented to prospective virtual screening. [ 22 ] [ 23 ]
Hybrid methods that rely on structural and ligand similarity have also been developed to overcome the limitations of traditional VLS approaches. These methodologies utilize evolution-based ligand-binding information to predict small-molecule binders [ 24 ] [ 25 ] and can employ both global structural similarity and pocket similarity. [ 24 ] A global structural similarity based approach employs either an experimental structure or a predicted protein model to find structural similarity with proteins in the PDB holo-template library. Upon detecting significant structural similarity, a 2D fingerprint based Tanimoto coefficient metric is applied to screen for small molecules that are similar to ligands extracted from selected holo PDB templates. [ 26 ] [ 27 ] The predictions from this method have been experimentally assessed and show good enrichment in identifying active small molecules.
The above specified method depends on global structural similarity and is not capable of a priori selecting a particular ligand‐binding site in the protein of interest. Further, since the methods rely on 2D similarity assessment for ligands, they are not capable of recognizing stereochemical similarity of small-molecules that are substantially different but demonstrate geometric shape similarity. To address these concerns, a new pocket centric approach, PoLi, capable of targeting specific binding pockets in holo‐protein templates, was developed and experimentally assessed.
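As an informal illustration of the 2D fingerprint Tanimoto comparison used in the global-similarity approach above, the sketch below computes the coefficient on toy fingerprints represented as sets of set-bit indices. The bit values and the similarity cutoff are illustrative assumptions, not the parameters used by the cited methods.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two binary fingerprints,
    each given as a set of indices of the bits that are set."""
    intersection = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return intersection / union if union else 0.0

# Toy fingerprints (bit indices only; real 2D fingerprints such as ECFP/Morgan
# fingerprints typically have hundreds to thousands of possible bits).
query_ligand = {3, 17, 42, 101, 256, 300}
candidate    = {3, 17, 42, 99, 256, 511}
print(f"Tanimoto similarity: {tanimoto(query_ligand, candidate):.2f}")
# Compounds above a chosen cutoff (often around 0.7 for such fingerprints, though
# the appropriate threshold is fingerprint- and task-dependent) would be kept as hits.
```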
The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, scales as O ( N 2 ) {\displaystyle O(N^{2})} , where N is the number of atoms in the system. Due to the quadratic scaling, the computational costs increase quickly.
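A minimal sketch of why the cost grows quadratically: a naive pairwise evaluation visits N(N-1)/2 atom pairs, so doubling N roughly quadruples the work. The pair potential used here is only a placeholder, not the scoring term of any particular screening program.

```python
import numpy as np

def pairwise_energy(coords, pair_term):
    """Naive O(N^2) accumulation of pairwise interaction terms.

    coords    : (N, 3) array of atomic coordinates
    pair_term : function of the distance between two atoms
    """
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):           # N*(N-1)/2 pairs
            r = np.linalg.norm(coords[i] - coords[j])
            total += pair_term(r)
    return total

# Doubling the number of atoms roughly quadruples the work:
# 1000 atoms -> 499,500 pairs; 2000 atoms -> 1,999,000 pairs.
lennard_jones = lambda r: 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
coords = np.random.default_rng(0).uniform(0.0, 20.0, size=(200, 3))
print(pairwise_energy(coords, lennard_jones))
```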
Ligand-based methods typically require a fraction of a second for a single structure comparison operation. Sometimes a single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds.
The size of the task requires a parallel computing infrastructure , such as a cluster of Linux systems, running a batch queue processor to handle the work, such as Sun Grid Engine or Torque PBS.
A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high speed indexing engine, such as Berkeley DB , may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high scoring candidates, can then be run after the whole experiment has been run.
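The batching and log-mining pattern described above might look roughly like the following sketch, in which each cluster job is assumed to have written one "compound_id,score" line per compound to its own log file; the file naming and record format are illustrative assumptions.

```python
import csv
import glob
import heapq

def mine_logs(pattern="scores_batch_*.csv", top_n=100):
    """Collect the highest-scoring candidates from per-batch log files.

    Each log file is assumed to hold one 'compound_id,score' row per compound,
    written by a separate cluster job that scored one batch of the library.
    """
    best = []  # min-heap of (score, compound_id), kept at size top_n
    for path in glob.glob(pattern):
        with open(path, newline="") as handle:
            for compound_id, score in csv.reader(handle):
                item = (float(score), compound_id)
                if len(best) < top_n:
                    heapq.heappush(best, item)
                elif item > best[0]:
                    heapq.heapreplace(best, item)
    return sorted(best, reverse=True)

for score, compound_id in mine_logs(top_n=25):
    print(f"{compound_id}\t{score:.3f}")
```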
The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest . Thus, success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should, therefore, be considered with caution. Low hit rates of interesting scaffolds are clearly preferable over high hit rates of already known scaffolds.
Most tests of virtual screening studies in the literature are retrospective. In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules or just actives) from a library containing a much higher proportion of assumed inactives or decoys. There are several distinct ways to select decoys by matching the properties of the corresponding active molecule [ 28 ] and more recently decoys are also selected in a property-unmatched manner. [ 29 ] The actual impact of decoy selection, either for training or testing purposes, has also been discussed. [ 29 ] [ 30 ]
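Retrospective performance of this kind is commonly summarized with an enrichment factor, the hit rate in the top-ranked fraction of the library relative to the hit rate expected from random selection. A minimal sketch follows; the numbers are illustrative and not taken from the cited studies.

```python
def enrichment_factor(ranked_labels, fraction=0.01):
    """Enrichment factor at a given fraction of the ranked library.

    ranked_labels : list of 1/0 flags (1 = known active, 0 = decoy),
                    ordered from best to worst virtual screening score.
    fraction      : top fraction of the library considered (e.g. 0.01 = top 1%).

    EF = (actives found in the top fraction / compounds in the top fraction)
         / (total actives / total compounds).
    """
    n = len(ranked_labels)
    top = max(1, int(n * fraction))
    hit_rate_top = sum(ranked_labels[:top]) / top
    hit_rate_all = sum(ranked_labels) / n
    return hit_rate_top / hit_rate_all if hit_rate_all else 0.0

# 10,000-compound library with 100 actives; suppose 30 actives rank in the top 1%.
ranked = [1] * 30 + [0] * 70 + [1] * 70 + [0] * 9830
print(enrichment_factor(ranked, 0.01))  # 30.0: thirty times better than random selection
```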
By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC 50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target. [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ]
Virtual screening is very useful for identifying hit molecules as a starting point for medicinal chemistry. As the approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has grown rapidly. [ 36 ]
When the receptor structure is not known, the aim is to predict how the ligands will bind to the receptor. Pharmacophore features, such as hydrogen-bond donors and acceptors, are identified for each ligand, and matching features are overlaid; however, it is unlikely that there is a single correct solution. [ 1 ]
This technique merges the results of searches that use different reference compounds but the same descriptors and similarity coefficient. It is beneficial because it is more efficient than using a single reference structure and gives the most accurate performance when the actives are structurally diverse. [ 1 ]
A pharmacophore is an ensemble of steric and electronic features that are needed for an optimal supramolecular interaction, or interactions, with a biological target structure in order to trigger its biological response. A representative set of actives is chosen, and most methods then look for similar binding features. [ 37 ] It is preferable to have multiple rigid molecules, and the ligands should be diverse; in other words, they should also possess differing features that do not take part in binding. [ 1 ]
Shape-based molecular similarity approaches have been established as important and popular virtual screening techniques. At present, the highly optimized screening platform ROCS (Rapid Overlay of Chemical Structures) is considered the de facto industry standard for shape-based, ligand-centric virtual screening. [ 38 ] [ 39 ] [ 40 ] It uses a Gaussian function to define molecular volumes of small organic molecules. The selection of the query conformation is less important, rendering shape-based screening ideal for ligand-based modeling: As the availability of a bioactive conformation for the query is not the limiting factor for screening — it is more the selection of query compound(s) that is decisive for screening performance. [ 16 ] Other shape-based molecular similarity methods such as Autodock-SS have also been developed. [ 41 ]
As an improvement to shape-based similarity methods, field-based methods try to take into account all the fields that influence a ligand-receptor interaction while being agnostic of the chemical structure used as a query. Various other fields are used in these methods, such as electrostatic or hydrophobic fields. [ 42 ] [ 43 ]
Quantitative structure-activity relationship (QSAR) models are predictive models based on information extracted from a set of known active and known inactive compounds. [ 44 ] In structure-activity relationships (SARs), the data are treated qualitatively and can be used with structural classes and more than one binding mode. These models prioritize compounds for lead discovery. [ 1 ]
Machine learning algorithms have been widely used in virtual screening approaches. Supervised learning techniques use training and test datasets composed of known active and known inactive compounds. Different ML algorithms have been applied with success in virtual screening strategies, such as recursive partitioning, support vector machines , random forests, k-nearest neighbors and neural networks . [ 45 ] [ 46 ] [ 47 ] These models estimate the probability that a compound is active and then rank each compound by that probability. [ 1 ]
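A minimal sketch of this score-and-rank workflow, using one of the algorithms listed above (a random forest from scikit-learn) on synthetic placeholder fingerprints; the data, feature length, and model settings are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder training data: 512-bit fingerprints for known actives (1) and inactives (0).
X_train = rng.integers(0, 2, size=(400, 512))
y_train = rng.integers(0, 2, size=400)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score an unscreened library and rank compounds by predicted probability of activity.
X_library = rng.integers(0, 2, size=(1000, 512))
prob_active = model.predict_proba(X_library)[:, 1]
ranking = np.argsort(prob_active)[::-1]
print("Top 5 library indices:", ranking[:5])
print("Their predicted probabilities:", np.round(prob_active[ranking[:5]], 3))
```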
The first machine learning model used on large datasets was substructure analysis, created in 1973. In this approach, each fragment substructure makes a continuous contribution to an activity of a specific type. [ 1 ] Substructure analysis overcomes the difficulty of massive dimensionality when analyzing structures in drug design. An efficient substructure analysis has also been described for structures that resemble a multi-level building or tower, where geometry is used to number the boundary joints of a given structure from the base towards the top; with specialized static condensation and substitution routines, this method proved more productive than previous substructure analysis models. [ 48 ]
Recursive partitioning is a method that creates a decision tree using qualitative data. The rules split the classes with a low misclassification error, and each step is repeated until no sensible splits can be found. However, recursive partitioning can have poor predictive ability, even though it can potentially produce fine models at the same rate. [ 1 ]
A ligand can be docked into an active site within a protein by using a docking search algorithm and a scoring function in order to identify the most likely pose for an individual ligand and to assign it a priority order. [ 1 ] [ 49 ]
|
https://en.wikipedia.org/wiki/Virtual_screening
|
Virtual sensing techniques, [ 1 ] also called soft sensing , [ 2 ] proxy sensing , inferential sensing , or surrogate sensing , are used to provide feasible and economical alternatives to costly or impractical physical measurement instruments . A virtual sensing system uses information available from other measurements and process parameters to calculate an estimate of the quantity of interest.
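A minimal sketch of a data-driven soft sensor, assuming historical records where the costly measurement was available alongside routinely measured process variables; the variable names, units, synthetic data, and the choice of a linear model are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Historical records: routinely measured process variables ...
temperature = rng.normal(350, 5, 500)     # K
pressure    = rng.normal(2.0, 0.1, 500)   # bar
flow_rate   = rng.normal(12.0, 1.0, 500)  # kg/s
X = np.column_stack([temperature, pressure, flow_rate])

# ... and the costly lab measurement we would like to replace (synthetic here).
product_quality = (0.02 * temperature + 1.5 * pressure - 0.3 * flow_rate
                   + rng.normal(0, 0.05, 500))

soft_sensor = LinearRegression().fit(X, product_quality)

# Online use: estimate the quantity of interest from current process readings.
current_readings = np.array([[352.0, 2.05, 11.4]])
print("estimated product quality:", soft_sensor.predict(current_readings)[0])
```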
In the field of gas sensors , an array of virtual sensors [ 3 ] can substitute electronic noses . Virtual gas sensors can be obtained by using a single sensor working in dynamic mode, i.e., working in repeated cycles that include a customized range of temperature, voltage, or both, which is equivalent to an array of real sensors. The choice of the temperature or voltage range depends on the gas type and its concentration.
|
https://en.wikipedia.org/wiki/Virtual_sensing
|
A virtual slide is created when glass slides are digitally scanned in their entirety to provide a high resolution digital image using a digital scanning system for the purpose of medical digital image analysis. Digital slides can be retrieved from a storage system, and viewed on a computer screen , by running image management software on a standard web browser , and assessed in exactly the same way as on a microscope . [ 1 ] Digital slides can be used as an alternative to traditional viewing for the purpose of teleconsultation. [ 2 ]
The main virtual slide collection is the " Juan Rosai's collection of surgical pathology seminars ", curated by USCAP .
|
https://en.wikipedia.org/wiki/Virtual_slide
|
In quantum physics , a virtual state is a very short-lived, unobservable quantum state. [ 1 ]
In many quantum processes a virtual state is an intermediate state, sometimes described as "imaginary" [ 2 ] in a multi-step process that mediates otherwise forbidden transitions. Since virtual states are not eigenfunctions of any operator, [ 3 ] normal parameters such as occupation, energy and lifetime need to be qualified. No measurement of a system will show one to be occupied, [ 4 ] but they still have lifetimes derived from uncertainty relations. [ 5 ] [ 6 ] While each virtual state has an associated energy, no direct measurement of its energy is possible [ 7 ] but various approaches have been used to make some measurements (for example see [ 8 ] and related work [ 9 ] [ 10 ] on virtual state spectroscopy) or extract other parameters using measurement techniques that depend upon the virtual state's lifetime. [ 11 ] The concept is quite general and can be used to predict and describe experimental results in many areas including Raman spectroscopy , [ 12 ] non-linear optics generally, [ 5 ] various types of photochemistry , [ 13 ] and nuclear processes. [ 14 ]
|
https://en.wikipedia.org/wiki/Virtual_state
|
In mechanics , virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system . The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements , one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action.
The work of a force on a particle along a virtual displacement is known as the virtual work.
Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, [ 1 ] but they have also been developed for the study of the mechanics of deformable bodies. [ 2 ]
The principle of virtual work has been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of the lever". [ 3 ] The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. [ 3 ] Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies and fluids. Bernoulli's version of the virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of Nouvelle mécanique ou Statique in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered the prototype of the contemporary virtual work principles. [ 3 ] In 1743 D'Alembert published his Traité de Dynamique, in which he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His idea was to convert a dynamical problem into a static problem by introducing inertial forces . [ 4 ] In 1768, Lagrange presented the virtual work principle in a more efficient form by introducing generalized coordinates and presented it as an alternative principle of mechanics by which all problems of equilibrium could be solved. A systematic exposition of Lagrange's program of applying this approach to all of mechanics, both static and dynamic, essentially D'Alembert's principle , was given in his Mécanique Analytique of 1788. [ 3 ] Although Lagrange had presented his version of the least action principle prior to this work, he recognized the virtual work principle to be more fundamental, mainly because it could be assumed alone as the foundation for all mechanics, unlike the modern understanding that least action does not account for non-conservative forces. [ 3 ]
If a force acts on a particle as it moves from point A {\displaystyle A} to point B {\displaystyle B} , then, for each possible trajectory that the particle may take, it is possible to compute the total work done by the force along the path. The principle of virtual work , which is the form of the principle of least action applied to these systems, states that the path actually followed by the particle is the one for which the difference between the work along this path and other nearby paths is zero (to the first order). The formal procedure for computing the difference of functions evaluated on nearby paths is a generalization of the derivative known from differential calculus, and is termed the calculus of variations .
Consider a point particle that moves along a path which is described by a function r ( t ) {\displaystyle \mathbf {r} (t)} from point A {\displaystyle A} , where r ( t = t 0 ) {\displaystyle \mathbf {r} (t=t_{0})} , to point B {\displaystyle B} , where r ( t = t 1 ) {\displaystyle \mathbf {r} (t=t_{1})} . It is possible that the particle moves from A {\displaystyle A} to B {\displaystyle B} along a nearby path described by r ( t ) + δ r ( t ) {\displaystyle \mathbf {r} (t)+\delta \mathbf {r} (t)} , where δ r ( t ) {\displaystyle \delta \mathbf {r} (t)} is called the variation of r ( t ) {\displaystyle \mathbf {r} (t)} . The variation δ r ( t ) {\displaystyle \delta \mathbf {r} (t)} satisfies the requirement δ r ( t 0 ) = δ r ( t 1 ) = 0 {\displaystyle \delta \mathbf {r} (t_{0})=\delta \mathbf {r} (t_{1})=0} . The scalar components of the variation δ r 1 ( t ) {\displaystyle \delta r_{1}(t)} , δ r 2 ( t ) {\displaystyle \delta r_{2}(t)} and δ r 3 ( t ) {\displaystyle \delta r_{3}(t)} are called virtual displacements. This can be generalized to an arbitrary mechanical system defined by the generalized coordinates q i {\displaystyle q_{i}} , i = 1 , 2 , . . . , n {\displaystyle i=1,2,...,n} . In which case, the variation of the trajectory q i ( t ) {\displaystyle q_{i}(t)} is defined by the virtual displacements δ q i {\displaystyle \delta q_{i}} , i = 1 , 2 , . . . , n {\displaystyle i=1,2,...,n} .
Virtual work is the total work done by the applied forces and the inertial forces of a mechanical system as it moves through a set of virtual displacements. When considering forces applied to a body in static equilibrium, the principle of least action requires the virtual work of these forces to be zero.
Consider a particle P that moves from a point A to a point B along a trajectory r ( t ) , while a force F ( r ( t )) is applied to it. The work done by the force F is given by the integral W = ∫ r ( t 0 ) = A r ( t 1 ) = B F ⋅ d r = ∫ t 0 t 1 F ⋅ d r d t d t = ∫ t 0 t 1 F ⋅ v d t , {\displaystyle W=\int _{\mathbf {r} (t_{0})=A}^{\mathbf {r} (t_{1})=B}\mathbf {F} \cdot d\mathbf {r} =\int _{t_{0}}^{t_{1}}\mathbf {F} \cdot {\frac {d\mathbf {r} }{dt}}~dt=\int _{t_{0}}^{t_{1}}\mathbf {F} \cdot \mathbf {v} ~dt,} where d r is the differential element along the curve that is the trajectory of P , and v is its velocity. It is important to notice that the value of the work W depends on the trajectory r ( t ) .
Now consider particle P that moves from point A to point B again, but this time it moves along the nearby trajectory that differs from r ( t ) by the variation δ r ( t ) = ε h ( t ) , where ε is a scaling constant that can be made as small as desired and h ( t ) is an arbitrary function that satisfies h ( t 0 ) = h ( t 1 ) = 0 . Suppose the force F ( r ( t ) + ε h ( t )) is the same as F ( r ( t )) . The work done by the force is given by the integral W ¯ = ∫ r ( t 0 ) = A r ( t 1 ) = B F ⋅ d ( r + ε h ) = ∫ t 0 t 1 F ⋅ d ( r ( t ) + ε h ( t ) ) d t d t = ∫ t 0 t 1 F ⋅ ( v + ε h ˙ ) d t . {\displaystyle {\bar {W}}=\int _{\mathbf {r} (t_{0})=A}^{\mathbf {r} (t_{1})=B}\mathbf {F} \cdot d(\mathbf {r} +\varepsilon \mathbf {h} )=\int _{t_{0}}^{t_{1}}\mathbf {F} \cdot {\frac {d(\mathbf {r} (t)+\varepsilon \mathbf {h} (t))}{dt}}~dt=\int _{t_{0}}^{t_{1}}\mathbf {F} \cdot (\mathbf {v} +\varepsilon {\dot {\mathbf {h} }})~dt.} The variation of the work δW associated with this nearby path, known as the virtual work , can be computed to be δ W = W ¯ − W = ∫ t 0 t 1 ( F ⋅ ε h ˙ ) d t . {\displaystyle \delta W={\bar {W}}-W=\int _{t_{0}}^{t_{1}}(\mathbf {F} \cdot \varepsilon {\dot {\mathbf {h} }})~dt.}
If there are no constraints on the motion of P , then 3 parameters are needed to completely describe P ' s position at any time t . If there are k ( k ≤ 3 ) constraint forces, then n = (3 − k ) parameters are needed. Hence, we can define n generalized coordinates q i ( t ) ( i = 1,..., n ), and express r ( t ) and δ r = ε h ( t ) in terms of the generalized coordinates. That is, r ( t ) = r ( q 1 , q 2 , … , q n ; t ) , {\displaystyle \mathbf {r} (t)=\mathbf {r} (q_{1},q_{2},\dots ,q_{n};t),} h ( t ) = h ( q 1 , q 2 , … , q n ; t ) . {\displaystyle \mathbf {h} (t)=\mathbf {h} (q_{1},q_{2},\dots ,q_{n};t).} Then, the derivative of the variation δ r = ε h ( t ) is given by d d t δ r = d d t ε h = ∑ i = 1 n ∂ h ∂ q i ε q ˙ i , {\displaystyle {\frac {d}{dt}}\delta \mathbf {r} ={\frac {d}{dt}}\varepsilon \mathbf {h} =\sum _{i=1}^{n}{\frac {\partial \mathbf {h} }{\partial q_{i}}}\varepsilon {\dot {q}}_{i},} then we have δ W = ∫ t 0 t 1 ( ∑ i = 1 n F ⋅ ∂ h ∂ q i ε q ˙ i ) d t = ∑ i = 1 n ( ∫ t 0 t 1 F ⋅ ∂ h ∂ q i ε q ˙ i d t ) . {\displaystyle \delta W=\int _{t_{0}}^{t_{1}}\left(\sum _{i=1}^{n}\mathbf {F} \cdot {\frac {\partial \mathbf {h} }{\partial q_{i}}}\varepsilon {\dot {q}}_{i}\right)dt=\sum _{i=1}^{n}\left(\int _{t_{0}}^{t_{1}}\mathbf {F} \cdot {\frac {\partial \mathbf {h} }{\partial q_{i}}}\varepsilon {\dot {q}}_{i}~dt\right).}
The requirement that the virtual work be zero for an arbitrary variation δ r ( t ) = ε h ( t ) is equivalent to the set of requirements Q i = F ⋅ ∂ h ∂ q i = 0 , i = 1 , … , n . {\displaystyle Q_{i}=\mathbf {F} \cdot {\frac {\partial \mathbf {h} }{\partial q_{i}}}=0,\quad i=1,\ldots ,n.} The terms Q i are called the generalized forces associated with the virtual displacement δ r .
Static equilibrium is a state in which the net force and net torque acted upon the system is zero. In other words, both linear momentum and angular momentum of the system are conserved. The principle of virtual work states that the virtual work of the applied forces is zero for all virtual movements of the system from static equilibrium . This principle can be generalized such that three dimensional rotations are included: the virtual work of the applied forces and applied moments is zero for all virtual movements of the system from static equilibrium. That is δ W = ∑ i = 1 m F i ⋅ δ r i + ∑ j = 1 n M j ⋅ δ ϕ j = 0 , {\displaystyle \delta W=\sum _{i=1}^{m}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}+\sum _{j=1}^{n}\mathbf {M} _{j}\cdot \delta \mathbf {\phi } _{j}=0,} where F i , i = 1, 2, ..., m and M j , j = 1, 2, ..., n are the applied forces and applied moments, respectively, and δ r i , i = 1, 2, ..., m and δ φ j , j = 1, 2, ..., n are the virtual displacements and virtual rotations , respectively.
Suppose the system consists of N particles, and it has f ( f ≤ 6 N ) degrees of freedom . It is sufficient to use only f coordinates to give a complete description of the motion of the system, so f generalized coordinates q k , k = 1, 2, ..., f are defined such that the virtual movements can be expressed in terms of these generalized coordinates . That is, δ r i ( q 1 , q 2 , … , q f ; t ) , i = 1 , 2 , … , m ; {\displaystyle \delta \mathbf {r} _{i}(q_{1},q_{2},\dots ,q_{f};t),\quad i=1,2,\dots ,m;} δ ϕ j ( q 1 , q 2 , … , q f ; t ) , j = 1 , 2 , … , n . {\displaystyle \delta \phi _{j}(q_{1},q_{2},\dots ,q_{f};t),\quad j=1,2,\dots ,n.}
The virtual work can then be reparametrized by the generalized coordinates : δ W = ∑ k = 1 f [ ( ∑ i = 1 m F i ⋅ ∂ r i ∂ q k + ∑ j = 1 n M j ⋅ ∂ ϕ j ∂ q k ) δ q k ] = ∑ k = 1 f Q k δ q k , {\displaystyle \delta W=\sum _{k=1}^{f}\left[\left(\sum _{i=1}^{m}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{k}}}+\sum _{j=1}^{n}\mathbf {M} _{j}\cdot {\frac {\partial \mathbf {\phi } _{j}}{\partial q_{k}}}\right)\delta q_{k}\right]=\sum _{k=1}^{f}Q_{k}\delta q_{k},} where the generalized forces Q k are defined as Q k = ∑ i = 1 m F i ⋅ ∂ r i ∂ q k + ∑ j = 1 n M j ⋅ ∂ ϕ j ∂ q k , k = 1 , 2 , … , f . {\displaystyle Q_{k}=\sum _{i=1}^{m}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{k}}}+\sum _{j=1}^{n}\mathbf {M} _{j}\cdot {\frac {\partial \mathbf {\phi } _{j}}{\partial q_{k}}},\quad k=1,2,\dots ,f.} Kane [ 5 ] shows that these generalized forces can also be formulated in terms of the ratio of time derivatives. That is, Q k = ∑ i = 1 m F i ⋅ ∂ v i ∂ q ˙ k + ∑ j = 1 n M j ⋅ ∂ ω j ∂ q ˙ k , k = 1 , 2 , … , f . {\displaystyle Q_{k}=\sum _{i=1}^{m}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {v} _{i}}{\partial {\dot {q}}_{k}}}+\sum _{j=1}^{n}\mathbf {M} _{j}\cdot {\frac {\partial \mathbf {\omega } _{j}}{\partial {\dot {q}}_{k}}},\quad k=1,2,\dots ,f.}
The principle of virtual work requires that the virtual work done on a system by the forces F i and moments M j vanishes if it is in equilibrium . Therefore, the generalized forces Q k are zero, that is δ W = 0 ⇒ Q k = 0 k = 1 , 2 , … , f . {\displaystyle \delta W=0\quad \Rightarrow \quad Q_{k}=0\quad k=1,2,\dots ,f.}
An important benefit of the principle of virtual work is that only forces that do work as the system moves through a virtual displacement are needed to determine the mechanics of the system. There are many forces in a mechanical system that do no work during a virtual displacement , which means that they need not be considered in this analysis. The two important examples are (i) the internal forces in a rigid body , and (ii) the constraint forces at an ideal joint .
Lanczos [ 1 ] presents this as the postulate: "The virtual work of the forces of reaction is always zero for any virtual displacement which is in harmony with the given kinematic constraints." The argument is as follows. The principle of virtual work states that in equilibrium the virtual work of the forces applied to a system is zero. Newton's laws state that at equilibrium the applied forces are equal and opposite to the reaction, or constraint forces. This means the virtual work of the constraint forces must be zero as well.
A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force F A at a point A located by the coordinate vector r A on the bar. The lever then exerts an output force F B at the point B located by r B . The rotation of the lever about the fulcrum P is defined by the rotation angle θ .
Let the coordinate vector of the point P that defines the fulcrum be r P , and introduce the lengths a = | r A − r P | , b = | r B − r P | , {\displaystyle a=|\mathbf {r} _{A}-\mathbf {r} _{P}|,\quad b=|\mathbf {r} _{B}-\mathbf {r} _{P}|,} which are the distances from the fulcrum to the input point A and to the output point B , respectively.
Now introduce the unit vectors e A and e B from the fulcrum to the point A and B , so r A − r P = a e A , r B − r P = b e B . {\displaystyle \mathbf {r} _{A}-\mathbf {r} _{P}=a\mathbf {e} _{A},\quad \mathbf {r} _{B}-\mathbf {r} _{P}=b\mathbf {e} _{B}.} This notation allows us to define the velocity of the points A and B as v A = θ ˙ a e A ⊥ , v B = θ ˙ b e B ⊥ , {\displaystyle \mathbf {v} _{A}={\dot {\theta }}a\mathbf {e} _{A}^{\perp },\quad \mathbf {v} _{B}={\dot {\theta }}b\mathbf {e} _{B}^{\perp },} where e A ⊥ and e B ⊥ are unit vectors perpendicular to e A and e B , respectively.
The angle θ is the generalized coordinate that defines the configuration of the lever, therefore using the formula above for forces applied to a one degree-of-freedom mechanism, the generalized force is given by Q = F A ⋅ ∂ v A ∂ θ ˙ − F B ⋅ ∂ v B ∂ θ ˙ = a ( F A ⋅ e A ⊥ ) − b ( F B ⋅ e B ⊥ ) . {\displaystyle Q=\mathbf {F} _{A}\cdot {\frac {\partial \mathbf {v} _{A}}{\partial {\dot {\theta }}}}-\mathbf {F} _{B}\cdot {\frac {\partial \mathbf {v} _{B}}{\partial {\dot {\theta }}}}=a(\mathbf {F} _{A}\cdot \mathbf {e} _{A}^{\perp })-b(\mathbf {F} _{B}\cdot \mathbf {e} _{B}^{\perp }).}
Now, denote as F A and F B the components of the forces that are perpendicular to the radial segments PA and PB . These forces are given by {\displaystyle F_{A}=\mathbf {F} _{A}\cdot \mathbf {e} _{A}^{\perp },\quad F_{B}=\mathbf {F} _{B}\cdot \mathbf {e} _{B}^{\perp }.} This notation and the principle of virtual work yield the formula for the generalized force as {\displaystyle Q=aF_{A}-bF_{B}=0.}
The ratio of the output force F B to the input force F A is the mechanical advantage of the lever, and is obtained from the principle of virtual work as {\displaystyle MA={\frac {F_{B}}{F_{A}}}={\frac {a}{b}}.}
This equation shows that if the distance a from the fulcrum to the point A where the input force is applied is greater than the distance b from fulcrum to the point B where the output force is applied, then the lever amplifies the input force. If the opposite is true that the distance from the fulcrum to the input point A is less than from the fulcrum to the output point B , then the lever reduces the magnitude of the input force.
This is the law of the lever , which was proven by Archimedes using geometric reasoning. [ 6 ]
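A minimal numeric check of the law of the lever, written in Python with purely illustrative distances and an assumed input force (the numbers are not from the source), is:

a = 0.8     # distance from the fulcrum to the input point A, in metres (assumed)
b = 0.2     # distance from the fulcrum to the output point B, in metres (assumed)
F_A = 50.0  # input force component perpendicular to PA, in newtons (assumed)

# Setting the generalized force Q = a*F_A - b*F_B to zero gives the output force.
F_B = a * F_A / b
print(F_B, F_B / F_A)   # 200.0 N and a mechanical advantage of 4.0, equal to a/b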
A gear train is formed by mounting gears on a frame so that the teeth of the gears engage. Gear teeth are designed so that the pitch circles of engaging gears roll on each other without slipping; this provides a smooth transmission of rotation from one gear to the next. For this analysis, we consider a gear train that has one degree-of-freedom, which means the angular rotation of every gear in the gear train is defined by the angle of the input gear.
The size of the gears and the sequence in which they engage define the ratio of the angular velocity ω A of the input gear to the angular velocity ω B of the output gear, known as the speed ratio, or gear ratio , of the gear train. Let R be the speed ratio; then {\displaystyle {\frac {\omega _{A}}{\omega _{B}}}=R.}
The input torque T A acting on the input gear G A is transformed by the gear train into the output torque T B exerted by the output gear G B . If we assume that the gears are rigid and that there are no losses in the engagement of the gear teeth, then the principle of virtual work can be used to analyze the static equilibrium of the gear train.
Let the angle θ of the input gear be the generalized coordinate of the gear train; then the speed ratio R of the gear train defines the angular velocity of the output gear in terms of that of the input gear, that is {\displaystyle \omega _{A}=\omega ,\quad \omega _{B}=\omega /R.}
The formula above for the principle of virtual work with applied torques yields the generalized force {\displaystyle Q=T_{A}{\frac {\partial \omega _{A}}{\partial \omega }}-T_{B}{\frac {\partial \omega _{B}}{\partial \omega }}=T_{A}-T_{B}/R=0.}
The mechanical advantage of the gear train is the ratio of the output torque T B to the input torque T A , and the above equation yields {\displaystyle MA={\frac {T_{B}}{T_{A}}}=R.}
Thus, the speed ratio of a gear train also defines its mechanical advantage. This shows that if the input gear rotates faster than the output gear, then the gear train amplifies the input torque. And, if the input gear rotates slower than the output gear, then the gear train reduces the input torque.
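The same static balance can be checked numerically; the short Python sketch below assumes an illustrative speed ratio and input torque (the values are not from the source).

R = 3.0      # assumed speed ratio: the input gear turns three times per output revolution
T_A = 10.0   # assumed input torque, in newton-metres

# The generalized force Q = T_A - T_B/R vanishes in equilibrium, so T_B = R*T_A.
T_B = R * T_A
print(T_B, T_B / T_A)   # 30.0 N*m output torque; the mechanical advantage equals R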
If the principle of virtual work for applied forces is used on individual particles of a rigid body , the principle can be generalized for a rigid body: When a rigid body that is in equilibrium is subject to virtual compatible displacements, the total virtual work of all external forces is zero; and conversely, if the total virtual work of all external forces acting on a rigid body is zero then the body is in equilibrium .
If a system is not in static equilibrium, D'Alembert showed that by introducing the acceleration terms of Newton's laws as inertia forces, this approach is generalized to define dynamic equilibrium. The result is D'Alembert's form of the principle of virtual work, which is used to derive the equations of motion for a mechanical system of rigid bodies.
The expression compatible displacements means that the particles remain in contact and displace together so that the work done by pairs of action/reaction inter-particle forces cancel out. Various forms of this principle have been credited to Johann (Jean) Bernoulli (1667–1748) and Daniel Bernoulli (1700–1782).
Let a mechanical system be constructed from n rigid bodies, B i , i = 1, ..., n , and let the resultant of the applied forces on each body be the force–torque pairs F i and T i , i = 1, ..., n . Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocities V i and angular velocities ω i , i = 1, ..., n , of each rigid body are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom .
Consider a single rigid body which moves under the action of a resultant force F and torque T , with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by Q ∗ = − ( M A ) ⋅ ∂ V ∂ q ˙ − ( [ I R ] α + ω × [ I R ] ω ) ⋅ ∂ ω ∂ q ˙ . {\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-([I_{R}]\alpha +\omega \times [I_{R}]\omega )\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.} This inertia force can be computed from the kinetic energy of the rigid body, T = 1 2 M V ⋅ V + 1 2 ω ⋅ [ I R ] ω , {\displaystyle T={\frac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\frac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},} by using the formula Q ∗ = − ( d d t ∂ T ∂ q ˙ − ∂ T ∂ q ) . {\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).}
A system of n rigid bodies with m generalized coordinates has the kinetic energy T = ∑ i = 1 n ( 1 2 M V i ⋅ V i + 1 2 ω i ⋅ [ I R ] ω i ) , {\displaystyle T=\sum _{i=1}^{n}\left({\frac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\frac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),} which can be used to calculate the m generalized inertia forces [ 7 ] Q j ∗ = − ( d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j ) , j = 1 , … , m . {\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.}
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that δ W = ( Q 1 + Q 1 ∗ ) δ q 1 + ⋯ + ( Q m + Q m ∗ ) δ q m = 0 , {\displaystyle \delta W=(Q_{1}+Q_{1}^{*})\delta q_{1}+\dots +(Q_{m}+Q_{m}^{*})\delta q_{m}=0,} for any set of virtual displacements δq j . This condition yields m equations, Q j + Q j ∗ = 0 , j = 1 , … , m , {\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,} which can also be written as d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j = Q j , j = 1 , … , m . {\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.} The result is a set of m equations of motion that define the dynamics of the rigid body system, known as Lagrange's equations or the generalized equations of motion .
If the generalized forces Q j are derivable from a potential energy V ( q 1 ,..., q m ), then these equations of motion take the form {\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
In this case, introduce the Lagrangian , L = T − V , so these equations of motion become {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0,\quad j=1,\ldots ,m.} These are known as the Euler–Lagrange equations for a system with m degrees of freedom, or Lagrange's equations of the second kind .
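For a simple system, the Euler–Lagrange equation can be generated symbolically from the kinetic and potential energies. The sketch below (a mass on a linear spring, an illustrative choice rather than an example from the source) uses Python with the sympy library.

import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)                 # single generalized coordinate

# Kinetic and potential energy of a mass on a linear spring (illustrative system).
T = sp.Rational(1, 2) * m * sp.diff(q, t)**2
V = sp.Rational(1, 2) * k * q**2
L = T - V

# Lagrange's equation: d/dt(dL/dq_dot) - dL/dq = 0
eom = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)
print(sp.simplify(eom))                 # m*Derivative(q(t), (t, 2)) + k*q(t)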
Consider now the free body diagram of a deformable body , which is composed of an infinite number of differential cubes. Let's define two unrelated states for the body:
The superscript * emphasizes that the two states are unrelated. Other than the above stated conditions, there is no need to specify if any of the states are real or virtual.
Imagine now that the forces and stresses in the σ {\displaystyle {\boldsymbol {\sigma }}} -State undergo the displacements and deformations in the ϵ {\displaystyle {\boldsymbol {\epsilon }}} -State: We can compute the total virtual (imaginary) work done by all forces acting on the faces of all cubes in two different ways:
Equating the two results leads to the principle of virtual work for a deformable body:
where the total external virtual work is done by T and f . Thus,
The right-hand-side of ( d , e ) is often called the internal virtual work. The principle of virtual work then states: External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains . It includes the principle of virtual work for rigid bodies as a special case where the internal virtual work is zero.
We start by looking at the total work done by the surface traction on the body going through the specified deformation: {\displaystyle \int _{S}\mathbf {u} \cdot \mathbf {T} dS=\int _{S}\mathbf {u} \cdot {\boldsymbol {\sigma }}\cdot \mathbf {n} dS}
Applying the divergence theorem to the right-hand side yields: {\displaystyle \int _{S}\mathbf {u\cdot {\boldsymbol {\sigma }}\cdot n} dS=\int _{V}\nabla \cdot \left(\mathbf {u} \cdot {\boldsymbol {\sigma }}\right)dV}
Now switch to indicial notation for the ease of derivation. ∫ V ∇ ⋅ ( u ⋅ σ ) d V = ∫ V ∂ ∂ x j ( u i σ i j ) d V = ∫ V ( ∂ u i ∂ x j σ i j + u i ∂ σ i j ∂ x j ) d V {\displaystyle {\begin{aligned}\int _{V}\nabla \cdot \left(\mathbf {u} \cdot {\boldsymbol {\sigma }}\right)dV&=\int _{V}{\frac {\partial }{\partial x_{j}}}\left(u_{i}\sigma _{ij}\right)dV\\&=\int _{V}\left({\frac {\partial u_{i}}{\partial x_{j}}}\sigma _{ij}+u_{i}{\frac {\partial \sigma _{ij}}{\partial x_{j}}}\right)dV\end{aligned}}}
To continue our derivation, we substitute in the equilibrium equation ∂ σ i j ∂ x j + f i = 0 {\displaystyle {\frac {\partial \sigma _{ij}}{\partial x_{j}}}+f_{i}=0} . Then ∫ V ( ∂ u i ∂ x j σ i j + u i ∂ σ i j ∂ x j ) d V = ∫ V ( ∂ u i ∂ x j σ i j − u i f i ) d V {\displaystyle \int _{V}\left({\frac {\partial u_{i}}{\partial x_{j}}}\sigma _{ij}+u_{i}{\frac {\partial \sigma _{ij}}{\partial x_{j}}}\right)dV=\int _{V}\left({\frac {\partial u_{i}}{\partial x_{j}}}\sigma _{ij}-u_{i}f_{i}\right)dV}
The first term on the right hand side needs to be broken into a symmetric part and a skew part as follows: ∫ V ( ∂ u i ∂ x j σ i j − u i f i ) d V = ∫ V ( 1 2 [ ( ∂ u i ∂ x j + ∂ u j ∂ x i ) + ( ∂ u i ∂ x j − ∂ u j ∂ x i ) ] σ i j − u i f i ) d V = ∫ V ( [ ϵ i j + 1 2 ( ∂ u i ∂ x j − ∂ u j ∂ x i ) ] σ i j − u i f i ) d V = ∫ V ( ϵ i j σ i j − u i f i ) d V = ∫ V ( ϵ : σ − u ⋅ f ) d V {\displaystyle {\begin{aligned}\int _{V}\left({\frac {\partial u_{i}}{\partial x_{j}}}\sigma _{ij}-u_{i}f_{i}\right)dV&=\int _{V}\left({\frac {1}{2}}\left[\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right)+\left({\frac {\partial u_{i}}{\partial x_{j}}}-{\frac {\partial u_{j}}{\partial x_{i}}}\right)\right]\sigma _{ij}-u_{i}f_{i}\right)dV\\&=\int _{V}\left(\left[\epsilon _{ij}+{\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}-{\frac {\partial u_{j}}{\partial x_{i}}}\right)\right]\sigma _{ij}-u_{i}f_{i}\right)dV\\&=\int _{V}\left(\epsilon _{ij}\sigma _{ij}-u_{i}f_{i}\right)dV\\&=\int _{V}\left({\boldsymbol {\epsilon }}:{\boldsymbol {\sigma }}-\mathbf {u} \cdot \mathbf {f} \right)dV\end{aligned}}} where ϵ {\displaystyle {\boldsymbol {\epsilon }}} is the strain that is consistent with the specified displacement field. The 2nd to last equality comes from the fact that the stress matrix is symmetric and that the product of a skew matrix and a symmetric matrix is zero.
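The last step relies on the double contraction of a symmetric matrix with a skew-symmetric matrix being zero; a quick numerical confirmation (an illustrative check, written in Python with numpy) is:

import numpy as np

rng = np.random.default_rng(0)
sigma = rng.standard_normal((3, 3))
sigma = sigma + sigma.T                      # symmetric stress-like matrix
grad_u = rng.standard_normal((3, 3))         # stand-in for a displacement gradient
skew = 0.5 * (grad_u - grad_u.T)             # skew part of the gradient

print(np.tensordot(sigma, skew))             # zero up to floating-point round-off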
To recap, we have shown through the above derivation that {\displaystyle \int _{S}\mathbf {u\cdot T} dS=\int _{V}{\boldsymbol {\epsilon }}:{\boldsymbol {\sigma }}dV-\int _{V}\mathbf {u} \cdot \mathbf {f} dV}
Moving the second term on the right-hand side of the equation to the left gives: {\displaystyle \int _{S}\mathbf {u\cdot T} dS+\int _{V}\mathbf {u} \cdot \mathbf {f} dV=\int _{V}{\boldsymbol {\epsilon }}:{\boldsymbol {\sigma }}dV}
The physical interpretation of the above equation is, the External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains .
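A minimal one-dimensional check of this statement, assuming a uniform elastic bar of length L, cross-section A and modulus E loaded by an end force P (all values are illustrative assumptions), can be written in Python as:

# Equilibrium stress and compatible displacement field of an axially loaded bar.
E, A, L, P = 200e9, 1e-4, 2.0, 1e4       # assumed material and loading data

u_end = P * L / (E * A)                  # end displacement of the equilibrium solution
external_work = P * u_end                # surface traction times displacement

stress = P / A
strain = P / (E * A)
internal_work = stress * strain * A * L  # integral of sigma : epsilon over the volume

print(external_work, internal_work)      # the external and internal virtual works coincide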
For practical applications:
These two general scenarios give rise to two often stated variational principles. They are valid irrespective of material behaviour.
Depending on the purpose, we may specialize the virtual work equation. For example, to derive the principle of virtual displacements in variational notations for supported bodies, we specify:
The virtual work equation then becomes the principle of virtual displacements:
This relation is equivalent to the set of equilibrium equations written for a differential element in the deformable body as well as of the stress boundary conditions on the part S t {\displaystyle S_{t}} of the surface. Conversely, ( f ) can be reached, albeit in a non-trivial manner, by starting with the differential equilibrium equations and the stress boundary conditions on S t {\displaystyle S_{t}} , and proceeding in the manner similar to ( a ) and ( b ).
Since virtual displacements are automatically compatible when they are expressed in terms of continuous , single-valued functions , we often mention only the need for consistency between strains and displacements. The virtual work principle is also valid for large real displacements; however, Eq.( f ) would then be written using more complex measures of stresses and strains.
Here, we specify:
The virtual work equation becomes the principle of virtual forces:
This relation is equivalent to the set of strain-compatibility equations as well as of the displacement boundary conditions on the part S u {\displaystyle S_{u}} . It has another name: the principle of complementary virtual work.
A specialization of the principle of virtual forces is the unit dummy force method , which is very useful for computing displacements in structural systems. According to D'Alembert's principle , inclusion of inertial forces as additional body forces will give the virtual work equation applicable to dynamical systems. More generalized principles can be derived by:
These are described in some of the references.
Among the many energy principles in structural mechanics , the virtual work principle deserves a special place due to its generality that leads to powerful applications in structural analysis , solid mechanics , and finite element method in structural mechanics .
|
https://en.wikipedia.org/wiki/Virtual_work
|
In computing, virtualization (abbreviated v12n ) is a series of technologies that allows dividing of physical computing resources into a series of virtual machines , operating systems , processes or containers. [ 1 ] Virtualization began in the 1960s with IBM CP/CMS . [ 1 ] The control program CP provided each user with a simulated stand-alone System/360 computer.
In hardware virtualization , the host machine is the machine that is used by the virtualization and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor . [ 2 ] Hardware virtualization is not the same as hardware emulation . Hardware-assisted virtualization facilitates building a virtual machine monitor and allows guest OSes to be run in isolation.
Desktop virtualization is the concept of separating the logical desktop from the physical machine.
Operating-system-level virtualization, also known as containerization , refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances.
The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization.
A form of virtualization was first demonstrated with IBM's CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967–1972, and re-implemented in IBM's VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer. Each such virtual machine had the complete capabilities of the underlying machine, and (for its user) the virtual machine was indistinguishable from a private system. This simulation was comprehensive, and was based on the Principles of Operation manual for the hardware. It thus included such elements as an instruction set, main memory, interrupts, exceptions, and device access. The result was a single machine that could be multiplexed among many users.
Hardware-assisted virtualization first appeared on the IBM System/370 in 1972, for use with VM/370 , the first virtual machine operating system. Note that the virtual memory hardware IBM added to the System/370 series in 1972 is not the same thing as the privilege rings of Intel VT-x , which give the hypervisor a higher privilege level so that it can properly control virtual machines requiring full access to supervisor and program or user modes.
With the increasing demand for high-definition computer graphics (e.g. CAD ), virtualization of mainframes lost some attention in the late 1970s, when the upcoming minicomputers fostered resource allocation through distributed computing , encompassing the commoditization of microcomputers .
The increase in compute capacity per x86 server (and in particular the substantial increase in modern networks' bandwidths) rekindled interest in data-center based computing which is based on virtualization techniques. The primary driver was the potential for server consolidation: virtualization allowed a single server to cost-efficiently consolidate compute power on multiple underutilized dedicated servers. The most visible hallmark of a return to the roots of computing is cloud computing , which is a synonym for data center based computing (or mainframe-like computing) through high bandwidth networks. It is closely connected to virtualization.
The initial implementation of the x86 architecture did not meet the Popek and Goldberg virtualization requirements needed to achieve "classical virtualization".
This made it difficult to implement a virtual machine monitor for this type of processor. Specific limitations included the inability to trap on some privileged instructions. [ 3 ] Therefore, to compensate for these architectural limitations, designers accomplished virtualization of the x86 architecture through two methods: full virtualization or paravirtualization . [ 4 ] Both create the illusion of physical hardware to achieve the goal of operating system independence from the hardware but present some trade-offs in performance and complexity.
Full virtualization was not fully available on the x86 platform prior to 2005. Many platform hypervisors for the x86 platform came very close and claimed full virtualization (such as Adeos , Mac-on-Linux, Parallels Desktop for Mac , Parallels Workstation , VMware Workstation , VMware Server (formerly GSX Server), VirtualBox , Win4BSD, and Win4Lin Pro ).
In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture, called Intel VT-x and AMD-V respectively. On the Itanium architecture, hardware-assisted virtualization is known as VT-i. The first generation of x86 processors to support these extensions was released in late 2005 and early 2006.
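On Linux, the presence of these extensions can be checked from the CPU flags exported in /proc/cpuinfo ("vmx" for Intel VT-x, "svm" for AMD-V). The following is a minimal, Linux-only sketch in Python; the function name is an arbitrary choice, not an established API.

def hardware_virtualization_flags(path="/proc/cpuinfo"):
    # Collect the CPU flag names and keep only those advertising VT-x or AMD-V.
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"vmx", "svm"} & flags

print(hardware_virtualization_flags() or "no VT-x/AMD-V flag found")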
Hardware virtualization (or platform virtualization) pools computing resources across one or more virtual machines . A virtual machine implements functionality of a (physical) computer with an operating system. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor . [ 2 ]
Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Arch Linux may host a virtual machine that looks like a computer with the Microsoft Windows operating system; Windows-based software can be run on the virtual machine. [ 5 ] [ 6 ]
Different types of hardware virtualization include:
Full virtualization employs techniques that pool physical computer resources into one or more instances, each running a virtual environment in which any software or operating system capable of executing on the raw hardware can be run. Two full virtualization techniques are commonly used: (a) binary translation and (b) hardware-assisted full virtualization. [ 1 ] Binary translation automatically modifies the software on-the-fly to replace instructions that "pierce the virtual machine" with a different, virtual-machine-safe sequence of instructions. [ 7 ] Hardware-assisted virtualization allows guest operating systems to be run in isolation with virtually no modification to the (guest) operating system.
Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine , and that is intended to run in a virtual machine.
This approach was pioneered in 1966 with the IBM CP-40 and CP-67 , predecessors of the VM family.
In binary translation , instructions are translated to match the emulated hardware architecture, [ 1 ] so that one piece of hardware imitates another; in hardware-assisted virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator ; both are computer programs that imitate hardware, but their domain of use in language differs. [ 8 ]
Hardware-assisted virtualization (or accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization) is a way of improving overall efficiency of hardware virtualization using help from the host processors. A full virtualization is used to emulate a complete hardware environment, or virtual machine , in which an unmodified guest operating system (using the same instruction set as the host machine) effectively executes in complete isolation.
Hardware-assisted virtualization was first introduced on the IBM 308X processors in 1980, with the Start Interpretive Execution (SIE) instruction. [ 9 ] It was added to x86 processors ( Intel VT-x , AMD-V or VIA VT ) in 2005, 2006 and 2010 [ 10 ] respectively.
IBM offers hardware virtualization for its IBM Power Systems hardware for AIX , Linux and IBM i , and for its IBM Z mainframes . IBM refers to its specific form of hardware virtualization as "logical partition", or more commonly as LPAR .
Hardware-assisted virtualization reduces the maintenance overhead of paravirtualization as it reduces (ideally, eliminates) the changes needed in the guest operating system. It is also considerably easier to obtain better performance.
Paravirtualization is a virtualization technique that presents a software interface to the virtual machines which is similar, yet not identical, to the underlying hardware–software interface. Paravirtualization improves performance and efficiency, compared to full virtualization, by having the guest operating system communicate with the hypervisor. By allowing the guest operating system to indicate its intent to the hypervisor, each can cooperate to obtain better performance when running in a virtual machine.
The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations which are substantially more difficult to run in a virtual environment compared to a non-virtualized environment. The paravirtualization provides specially defined 'hooks' to allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest.
Paravirtualization requires the guest operating system to be explicitly ported for the para- API – a conventional OS distribution that is not paravirtualization-aware cannot be run on top of a paravirtualizing VMM. However, even in cases where the operating system cannot be modified, components may be available that enable many of the significant performance advantages of paravirtualization. For example, the Xen Windows GPLPV project provides a kit of paravirtualization-aware device drivers, that are intended to be installed into a Microsoft Windows virtual guest running on the Xen hypervisor. [ 11 ] Such applications tend to be accessible through the paravirtual machine interface environment. This ensures run-mode compatibility across multiple encryption algorithm models, allowing seamless integration within the paravirtual framework. [ 12 ]
The term "paravirtualization" was first used in the research literature in association with the Denali Virtual Machine Manager. [ 13 ] The term is also used to describe the Xen , L4 , TRANGO , VMware , Wind River and XtratuM hypervisors . All these projects use or can use paravirtualization techniques to support high performance virtual machines on x86 hardware by implementing a virtual machine that does not implement the hard-to-virtualize parts of the actual x86 instruction set. [ 14 ]
In 2005, VMware proposed a paravirtualization interface, the Virtual Machine Interface (VMI), as a communication mechanism between the guest operating system and the hypervisor. This interface enabled transparent paravirtualization in which a single binary version of the operating system can run either on native hardware or on a hypervisor in paravirtualized mode.
The first appearance of paravirtualization support in Linux occurred with the merge of the ppc64 port in 2002, [ 15 ] which supported running Linux as a paravirtualized guest on IBM pSeries (RS/6000) and iSeries (AS/400) hardware.
At the USENIX conference in 2006 in Boston, Massachusetts , a number of Linux development vendors (including IBM, VMware, Xen, and Red Hat) collaborated on an alternative form of paravirtualization, initially developed by the Xen group, called "paravirt-ops". [ 16 ] The paravirt-ops code (often shortened to pv-ops) was included in the mainline Linux kernel as of the 2.6.23 version, and provides a hypervisor-agnostic interface between the hypervisor and guest kernels. Distribution support for pv-ops guest kernels appeared starting with Ubuntu 7.04 and RedHat 9. Xen hypervisors based on any 2.6.24 or later kernel support pv-ops guests, as does VMware's Workstation product beginning with version 6. [ 17 ]
Hybrid virtualization combines full virtualization techniques with paravirtualized drivers to overcome limitations with hardware-assisted full virtualization. [ 18 ]
A hardware-assisted full virtualization approach uses an unmodified guest operating system, which involves many VM traps that produce high CPU overhead, limiting scalability and the efficiency of server consolidation. [ 19 ] The hybrid virtualization approach overcomes this problem.
Desktop virtualization separates the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN , Wireless LAN or even the Internet . In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users. [ 20 ]
Companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing . [ 21 ] Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. [ 21 ] For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to more quickly respond to the changing needs of the user and business. [ 22 ] Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files. [ 20 ] With multiseat configuration , session virtualization can be accomplished using a single PC with multiple monitors, keyboards, and mice connected.
Thin clients , which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space , RAM or even processing power , but many organizations are beginning to look at the cost benefits of eliminating "thick client" desktops that are packed with software (and require software licensing fees) and making more strategic investments. [ 23 ]
Desktop virtualization simplifies software versioning and patch management, where the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to have access to on the workstation.
Moving virtualized desktops into the cloud creates hosted virtual desktops (HVDs), in which the desktop images are centrally managed and maintained by a specialist hosting firm. Benefits include scalability and the reduction of capital expenditure, which is replaced by a monthly operational cost. [ 24 ]
Operating-system-level virtualization, also known as containerization , refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers, [ 25 ] partitions, virtual environments (VEs) or jails ( FreeBSD jail or chroot jail ), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares , CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and devices assigned to the container.
This provides many of the benefits that virtual machines have such as standardization and scalability, while using less resources as the kernel is shared between containers. [ 26 ]
Containerization started gaining prominence in 2014, with the introduction of Docker . [ 27 ] [ 28 ]
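The kernel namespaces underlying containerization can be demonstrated without a full container engine. The sketch below (Linux-only, requires the util-linux unshare tool and root privileges; an illustration rather than a recommended workflow) runs ps inside fresh PID and mount namespaces, so the process sees only itself rather than the host's process table.

import subprocess

# Run `ps -e` in new PID and mount namespaces; the output lists only the
# processes created inside the namespace, mimicking a container's isolation.
subprocess.run(
    ["sudo", "unshare", "--fork", "--pid", "--mount-proc", "ps", "-e"],
    check=True,
)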
Virtualization, in particular, full virtualization has proven beneficial for:
A common goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS. Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user.
Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of "one server, one application" that leads to underutilized resources, virtual resources are dynamically applied to meet business needs "without any excess fat". [ 30 ]
Virtual machines running proprietary operating systems require licensing, regardless of the host machine's operating system. For example, installing Microsoft Windows into a VM guest requires its licensing requirements to be satisfied. [ 31 ] [ 32 ] [ 33 ]
|
https://en.wikipedia.org/wiki/Virtualization
|
Virtualization for aggregation combines physical servers and their memory and CPU power to create a single, large virtual machine . [ 1 ]
Virtualization for aggregation is the opposite of traditional server virtualization , which partitions a single physical system so that multiple OSes can be run on the hardware. The technology is primarily used to run compute-intensive applications on a virtual symmetric multiprocessing (SMP) system, and can benefit users who want to provide very large memory and capacity resources for high-performance computing (HPC) needs without having to invest in proprietary SMP systems, which are beyond the reach of many users.
This computing article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Virtualization_for_aggregation
|
In topology , an area of mathematics , the virtually Haken conjecture states that every compact , orientable , irreducible three-dimensional manifold with infinite fundamental group is virtually Haken . That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold .
After the proof of the geometrization conjecture by Perelman , the conjecture was only open for hyperbolic 3-manifolds .
The conjecture is usually attributed to Friedhelm Waldhausen in a paper from 1968, [ 1 ] although he did not formally state it. This problem is formally stated as Problem 3.2 in Kirby 's problem list.
A proof of the conjecture was announced on March 12, 2012 by Ian Agol in a seminar lecture he gave at the Institut Henri Poincaré . The proof appeared shortly thereafter in a preprint which was eventually published in Documenta Mathematica . [ 2 ] The proof followed a strategy developed in previous work of Daniel Wise and collaborators, relying on actions of the fundamental group on certain auxiliary spaces (CAT(0) cube complexes, also known as median graphs ). [ 3 ] It used as an essential ingredient the freshly obtained solution to the surface subgroup conjecture by Jeremy Kahn and Vladimir Markovic . [ 4 ] [ 5 ] Other results which are directly used in Agol's proof include the Malnormal Special Quotient Theorem of Wise [ 6 ] and a criterion of Nicolas Bergeron and Wise for the cubulation of groups. [ 7 ]
In 2018, related results were obtained by Piotr Przytycki and Daniel Wise, who proved that mixed 3-manifolds are also virtually special, that is, they can be cubulated into a cube complex with a finite cover in which all the hyperplanes are embedded, which by the previously mentioned work can be made virtually Haken. [ 8 ] [ 9 ]
This topology-related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Virtually_Haken_conjecture
|
A virtually imaged phased array ( VIPA ) [ 1 ] is an angular dispersive device that, like a prism or a diffraction grating , splits light into its spectral components. The device works almost independently of polarization . In contrast to prisms or regular diffraction gratings, the VIPA has a much higher angular dispersion but has a smaller free spectral range . This aspect is similar to that of an Echelle grating , since it also uses high diffraction orders. To overcome this disadvantage, the VIPA can be combined with a diffraction grating. The VIPA is a compact spectral disperser with high wavelength resolving power.
In a virtually imaged phased array , the phased array is the optical analogue of a phased array antenna at radio frequencies. Unlike a diffraction grating which can be interpreted as a real phased array, in a virtually imaged phased array the phased array is created in a virtual image . More specifically, the optical phased array is virtually formed with multiple virtual images of a light source. This is the fundamental difference from an Echelle grating, where a similar phased array is formed in the real space. The virtual images of a light source in the VIPA are automatically aligned exactly at a constant interval, which is critical for optical interference. This is an advantage of the VIPA over an Echelle grating. When the output light is observed, the virtually imaged phased array works as if light were emitted from a real phased array.
VIPA was proposed and named by Shirasaki in 1996. [ 1 ] Prior to the publication in the paper, a preliminary presentation was given by Shirasaki at a conference. [ 2 ] This presentation was reported in Laser Focus World. [ 3 ] The details of this new approach to producing angular dispersion were described in the patent. [ 4 ] Since then, in the first ten years, the VIPA was of particular interest in the field of optical fiber communication technology. The VIPA was first applied to optical wavelength division multiplexing (WDM) and a wavelength demultiplexer was demonstrated for a channel spacing of 0.8 nm, [ 1 ] which was a standard channel spacing at the time. Later, a much smaller channel separation of 24 pm and a 3 dB bandwidth of 6 pm were achieved by Weiner in 2005 at 1550 nm wavelength range. [ 5 ] For another application, by utilizing the wavelength-dependent length of the light path due to the angular dispersion of the VIPA, the compensation of chromatic dispersion of fibers was studied and demonstrated (Shirasaki, 1997). [ 6 ] [ 7 ] [ 8 ] The compensation was further developed for tunable systems by using adjustable mirrors [ 9 ] [ 10 ] [ 11 ] or a spatial light modulator (Weiner, 2006). [ 12 ] Using the VIPA, compensation of polarization mode dispersion was also achieved (Weiner, 2008). [ 13 ] Furthermore, pulse shaping using the combination of a VIPA for high-resolution wavelength splitting/recombining and a SLM was demonstrated (Weiner, 2010). [ 14 ]
A drawback of the VIPA is its limited free spectral range due to the high diffraction order. To expand the functional wavelength range, Shirasaki combined a VIPA with a regular diffraction grating in 1997 to provide a broadband two-dimensional spectral disperser. [ 15 ] This configuration can be a high performance substitute for diffraction gratings in many grating applications. After the mid 2000s, the two-dimensional VIPA disperser has been used in various fields and devices, such as high-resolution WDM (Weiner, 2004), [ 16 ] a laser frequency comb (Diddams, 2007), [ 17 ] a spectrometer (Nugent-Glandorf, 2012), [ 18 ] astrophysical instruments (Le Coarer, 2017, Bourdarot, 2018, Delboulbé, 2022, and Stacey, 2024), [ 19 ] [ 20 ] [ 21 ] [ 22 ] Brillouin spectroscopy in biomechanics (Scarcelli, 2008, Rosa, 2018, and Margueritat, 2020), [ 23 ] [ 24 ] [ 25 ] other Brillouin spectroscopy (Loubeyre, 2022 and Wu, 2023), [ 26 ] [ 27 ] beam scanning (Ford, 2008), [ 28 ] microscopy (Jalali, 2009), [ 29 ] tomography imaging (Ellerbee, 2014), [ 30 ] metrology (Bhattacharya, 2015), [ 31 ] fiber laser (Xu, 2020), [ 32 ] LiDAR (Fu, 2021), [ 33 ] and surface measurement (Zhu, 2022). [ 34 ]
The main component of a VIPA is a glass plate whose normal is slightly tilted with respect to the input light. One side (light input side) of the glass plate is coated with a 100% reflective mirror and the other side (light output side) is coated with a highly reflective but partially transmissive mirror. The side with the 100% reflective mirror has an anti-reflection coated light entrance area, through which a light beam enters the glass plate. The input light is line-focused to a line (focal line) on the partially transmissive mirror on the light output side. A typical line-focusing lens is a cylindrical lens , which is also part of the VIPA. The light beam is diverging after the beam waist located at the line-focused position.
After the light enters the glass plate through the light entrance area, the light is reflected at the partially transmissive mirror and the 100% reflective mirror, and thus the light travels back and forth between the partially transmissive mirror and the 100% reflective mirror.
It is noted that the glass plate is tilted as a result of its slight rotation where the axis of rotation is the focal line. This rotation/tilt prevents the light from leaving the glass plate out of the light entrance area. Therefore, in order for the optical system to work as a VIPA, there is a critical minimum angle of tilt that allows the light entering through the light entrance area to return only to the 100% reflective mirror. [ 1 ] Below this angle, the function of the VIPA is severely impaired. If the tilting angle were zero, the reflected light from the partially transmissive mirror would travel exactly in reverse and exit the glass plate through the light entrance area without being reflected by the 100% reflective mirror. In the figure, refraction at the surfaces of the glass plate was ignored for simplicity. [ 1 ]
When the light beam is reflected each time at the partially transmissive mirror, a small portion of the light power passes through the mirror and travels away from the glass plate. For a light beam passing through the mirror after multiple reflections, the position of the line-focus can be seen in the virtual image when observed from the light output side. Therefore, this light beam travels as if it originated at a virtual light source located at the position of the line-focus and diverged from the virtual light source. The positions of the virtual light sources for all the transmitted light beams automatically align along the normal to the glass plate with a constant spacing, that is, a number of virtual light sources are superimposed to create an optical phased array. Due to the interference of all the light beams, the phased array emits a collimated light beam in one direction, which is at a wavelength dependent angle, and therefore, an angular dispersion is produced.
Similarly to the resolving power of a diffraction grating, which is determined by the number of the illuminated grating elements and the order of diffraction, the resolving power of a VIPA is determined by the reflectivity of the back surface of the VIPA and the thickness of the glass plate. For a fixed thickness, a high reflectivity causes light to stay longer in the VIPA. This creates more virtual sources of light and thus increases the resolving power. On the other hand, with a lower reflectivity, the light in the VIPA is quickly lost, meaning fewer virtual sources of light are superimposed. This results in lower resolving power.
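The dependence of resolving power on reflectivity can be illustrated with a toy model (not from the source) in which the output field is a coherent sum over virtual sources, each attenuated by the back-surface reflectivity R and shifted by a common round-trip phase; a higher reflectivity keeps more terms in the sum and narrows the transmission peaks. All parameter values in the Python sketch below are assumptions.

import numpy as np

def vipa_intensity(phi, R, n_sources=200):
    # Coherent sum over n_sources virtual sources with amplitude sqrt(R)**k and
    # phase k*phi; returns the output intensity versus the round-trip phase phi.
    k = np.arange(n_sources)
    amplitudes = np.sqrt(R) ** k
    field = amplitudes @ np.exp(1j * np.outer(k, phi))
    return np.abs(field) ** 2

phi = np.linspace(-np.pi, np.pi, 2001)
for R in (0.90, 0.99):
    I = vipa_intensity(phi, R)
    print(R, int(np.sum(I > I.max() / 2)))   # the higher reflectivity gives a much narrower peak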
For large angular dispersion with high resolving power, the dimensions of the VIPA should be accurately controlled. Fine tuning of the VIPA characteristics was demonstrated by developing an elastomer-based structure (Metz, 2013). [ 35 ]
A constant reflectivity of the partially transmissive mirror in the VIPA produces a Lorentzian power distribution when the output light is imaged onto a screen, which has a negative effect on the wavelength selectivity. This can be improved by providing the partially transmissive mirror with a linearly decreasing reflectivity. This leads to a Gaussian -like power distribution on a screen and improves the wavelength selectivity or the resolving power. [ 36 ]
An analytical calculation of the VIPA was first performed by Vega and Weiner in 2003 [ 37 ] based on the theory of plane waves and an improved model based on the Fresnel diffraction theory was developed by Xiao and Weiner in 2004. [ 38 ]
VIPA devices have been commercialized by LightMachinery as spectral disperser devices or components with various customized design parameters.
|
https://en.wikipedia.org/wiki/Virtually_imaged_phased_array
|
A virtually safe dose ( VSD ) may be determined for those carcinogens not assumed to have a threshold. Virtually safe doses are calculated by regulatory agencies to represent the level of exposure to such carcinogenic agents at which an excess of cancers greater than that level accepted by society is not expected. [ 1 ]
This toxicology -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Virtually_safe_dose
|
A virucide (alternatively spelled viricide [ 1 ] ) is any physical or chemical agent that deactivates or destroys viruses. [ 2 ] The substances are not only virucidal but can be also bactericidal , fungicidal , sporicidal or tuberculocidal . [ 3 ]
Virucides are to be used outside the human body, and as such fall into the category of disinfectants (applied not to the human body) and antiseptics (applied to the surface of skin) for those safe enough. Overall, the notion of virucide differs from an antiviral drug such as Aciclovir , which inhibits the proliferation of the virus inside the body. [ 4 ] [ 5 ] [ 6 ]
CDC's Disinfection and Sterilization list of Chemical Disinfectants mentions and discusses substances such as: alcohol, chlorine and chlorine compounds, formaldehyde, glutaraldehyde, hydrogen peroxide, iodophors, ortho-phthalaldehyde (OPA), peracetic acid, peracetic acid and hydrogen peroxide, phenolics, quaternary ammonium compounds, with different, but usually potent microbicidal activity . [ 7 ] [ 8 ] Other inactivating agents such as UV light , metals, and ozone exist. [ 9 ] [ 10 ] [ 11 ] [ 8 ]
According to the Centers for Disease Control and Prevention (CDC), a virucide is "An agent that kills viruses to make them noninfective." [ 12 ]
According to a definition by the Robert Koch Institute in Germany and further institutions, [ 13 ] virucidal means effective against both enveloped and non-enveloped viruses. [ 14 ] [ 15 ] [ 9 ]
Due to the complexity of the subject, in Germany, Robert-Koch-Institute introduced sub-definitions such as "limited virucidal" or "limited virucidal plus" (translated from German) to differentiate its meaning further. [ 16 ] [ 17 ]
Note that the terms virus inactivation and viral clearance have a specific meaning in the medical process industry, e.g., the removal of HIV from blood products.
Different microbicidal substances interact with viruses through different mechanisms, such as: [ 18 ]
The exact mechanisms, for example of iodine (PVP-I), are still not clear, but it is targeting the bacterial protein synthesis due to disruption of electron transport, DNA denaturation or disruptive effects on the virus membrane. [ 19 ]
The U.S. Centers for Disease Control and Prevention administers a regulatory framework for disinfectants and sterilants. [ 20 ] To earn virucidal registration, extensive data on harder-to-kill viruses demonstrating long-lasting virucidal efficacy need to be provided. [ 21 ] [ 22 ] [ 23 ]
A specific protocol for hand-hygiene testing has been researched and established by microbiologist Prof. Graham Ayliffe . [ 32 ]
Virucides are not intended for use inside the body, [ 33 ] [ 34 ] and most are disinfectants that are not intended for use on the surface of the body. [ 35 ] Most substances are toxic. [ 3 ] None of the listed substances replaces vaccination [ 36 ] [ 37 ] [ 38 ] or antiviral drugs , if available. [ 39 ] [ 40 ] [ 41 ] Virucides are usually labeled with instructions for safe, effective use. [ 42 ] [ 35 ] [ 43 ] [ 44 ] The correct use and scope of disinfectants is very important. [ 45 ] [ 46 ] [ 47 ]
Potential serious side-effects with using "quats" ( Quaternary ammonium compounds ) exist, and over-use "can have a negative impact on your customers' septic systems." [ 48 ]
Mouth-rinsing or gargling can reduce virus load, [ 49 ] however experts warn that "Viruses in the nose, lungs or trachea that are released when speaking, sneezing and coughing are unlikely to be reached because the effect is based on physical accessibility of the surface mucous membrane". [ 50 ]
According to Deutsche Dermatologische Gesellschaft , medical practitioners recommend that disinfectants are gentler on the skin compared to soap-washing. The disinfected hands should then also be creamed to support the regeneration of the skin barrier . Skin care does not reduce the antiseptic effect of the alcoholic disinfectants. [ 51 ] [ 52 ]
The "explosive" use of antibacterial cleansers has led the CDC to monitor substances in adults. [ 53 ]
On April 5, 2021, a Press Briefing by White House COVID-19 Response Team and Public Health Officials mentions that "Cleaning with household cleaners containing soap or detergent will physically remove germs from surfaces. This process does not necessarily kill germs, but reduces the risk of infection by removing them. Disinfecting uses a chemical product, which is a process that kills the germs on the surfaces. In most situations, regular cleaning of surfaces with soap and detergent, not necessarily disinfecting those surfaces, is enough to reduce the risk of COVID-19 spread. Disinfection is only recommended in indoor settings — schools and homes — where there has been a suspected or confirmed case of COVID-19 within the last 24 hours. In most situations, regular cleaning of surfaces with soap and detergent, not necessarily disinfecting those surfaces, is enough to reduce the risk of COVID-19 spread." [ 54 ] [ 55 ]
The CDC issued a special report "Knowledge and Practices Regarding Safe Household Cleaning and Disinfection for COVID-19 Prevention" due to the increased number of calls to poison centers regarding exposures to cleaners and disinfectants since the onset of the COVID-19 pandemic, concluding that "Public messaging should continue to emphasize evidence-based, safe cleaning and disinfection practices to prevent SARS-CoV-2 transmission in households, including hand hygiene and cleaning and disinfection of high-touch surfaces." [ 56 ] [ 57 ]
CDC provides a Guideline for Disinfection and Sterilization in Healthcare Facilities. [ 58 ]
Each mentioned item in the list has different microbicidal activity, i.e. some viruses can be more or less resistant. For example, Poliovirus is resistant to a solution of 3% H 2 O 2 even after a contact time of 10 minutes, [ 59 ] however 7.5% H 2 O 2 takes 30 minutes to inactivate over 99.9% of Poliovirus. [ 7 ] Generally, hydrogen peroxide is considered as a potent virucide in appropriate concentrations, specifically in other forms such as gaseous . [ 18 ]
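Virucidal efficacy of this kind is commonly reported as a log10 reduction in infectious titre; inactivating 99.9% of the virus corresponds to a 3-log reduction. The short Python sketch below (the titre values are hypothetical) converts between the two.

import math

def log_reduction(initial_titre, final_titre):
    # log10 reduction factor between the titres before and after treatment.
    return math.log10(initial_titre / final_titre)

lr = log_reduction(1e6, 1e3)     # hypothetical titres before and after treatment
print(lr)                        # 3.0
print(100 * (1 - 10 ** -lr))     # 99.9 percent inactivated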
Another example is povidone-iodine (PVP-I), which is found to be effective against herpes simplex virus [ 60 ] or SARS-CoV-2, [ 61 ] and other viruses, [ 62 ] but coxsackievirus and polio was rather resistant or less sensitive to inactivation. [ 63 ] [ 62 ]
In the beginning of the COVID-19 pandemic, then-US President Donald Trump delivered a very dangerous message to the public on the use of disinfectants, which was immediately rejected and refuted by health professionals. [ 64 ] In essence, and as mentioned above, virucides are usually toxic depending on concentrations, mixture, etc., and can be deadly not just to viruses, but also if inside a human or animal body [ 65 ] or on surface of body. [ 66 ]
With regards to the COVID-19 pandemic, some of the mentioned agents are still under research about their microbicidal activity and effectivity against SARS-CoV-2, e.g., on surfaces, [ 67 ] [ 68 ] as mouth-washes, [ 69 ] hand-washing, [ 70 ] etc.
A mixture of 62–71% ethanol, 0.5% hydrogen peroxide or 0.1% sodium hypochlorite is found to be able to deactivate the novel Coronavirus on surfaces within 1 minute. [ 71 ]
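Working solutions such as 0.1% sodium hypochlorite are usually prepared by dilution, using the relation C1·V1 = C2·V2. The Python sketch below assumes a stock household bleach concentration of 5% sodium hypochlorite, which is only an illustrative assumption since products vary; the manufacturer's label instructions take precedence.

stock_conc = 5.0      # assumed percent sodium hypochlorite in the stock bleach
target_conc = 0.1     # percent required in the working solution
target_volume = 1.0   # litres of working solution to prepare

# C1*V1 = C2*V2  =>  volume of stock needed to reach the target concentration.
stock_needed = target_conc * target_volume / stock_conc
water_needed = target_volume - stock_needed
print(stock_needed, water_needed)   # 0.02 L of stock made up with 0.98 L of water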
A 2020 systematic review on hydrogen peroxide (H 2 O 2 ) mouth-washes concludes that they do not have a virucidal effect, recommending that "dental care protocols during the COVID-19 pandemic should be revised." [ 72 ] Additional research on virucidal efficacy against the coronavirus is ongoing. [ 73 ] [ 69 ] [ 74 ]
Various information and overview of light-based strategies ( UV-C and other types of light sources; see also Ultraviolet germicidal irradiation ) to combat the COVID-19 pandemic are available. [ 75 ] [ 76 ] [ 77 ] [ 78 ]
A systematic review of 16 studies by Cochrane on antimicrobial mouthwashes (gargling) and nasal sprays concludes that "there is currently no evidence relating to the benefits and risks of patients with COVID‐19 using antimicrobial mouthwashes or nasal sprays." [ 79 ]
Treatment of SARS-CoV for 2 min with Isodine ( PVP-I ) is found to strongly reduce the virus infectivity. [ 80 ]
The International Society of Antimicrobial Chemotherapy (ISAC) is one of the major umbrella organizations for education, research and development in the area of therapy of infections. Its members are national organizations, currently 86 and over 50,000 individual members. [ 81 ]
Note that many of the substances, if sold commercially, are usually combinations and mixtures with varying molecular contents. Also note that most products have a limited virucidal efficacy. [ 82 ] A specific test protocol is applied. [ 83 ] The lists' scope is limited; for further products refer to other lists. [ 84 ] [ 85 ] [ 3 ] Other factors such as stability of the concentrate, application concentration, exposure time, timing of the solution, hydrogen ion concentration (pH value), temperature, etc., play a certain role in the effectiveness of a virucide. [ 8 ]
The EPA provides a public listing called "List N". [ 86 ] [ 87 ]
|
https://en.wikipedia.org/wiki/Virucide
|
Virulence is a pathogen 's or microorganism 's ability to cause damage to a host.
In most cases, especially in animal systems, virulence refers to the degree of damage caused by a microbe to its host . [ 1 ] The pathogenicity of an organism—its ability to cause disease —is determined by its virulence factors . [ 2 ] [ 3 ] In the specific context of gene for gene systems, often in plants, virulence refers to a pathogen's ability to infect a resistant host. [ 4 ] Virulence can also be transferred using a plasmid .
The noun virulence ( Latin noun virulentia ) derives from the adjective virulent , meaning disease severity. [ 5 ] The word virulent derives from the Latin word virulentus , meaning "a poisoned wound" or "full of poison". [ 5 ] [ 6 ] The term virulence does not only apply to viruses.
From an ecological standpoint, virulence is the loss of fitness induced by a parasite upon its host. Virulence can be understood in terms of proximate causes —those specific traits of the pathogen that help make the host ill—and ultimate causes —the evolutionary pressures that lead to virulent traits occurring in a pathogen strain. [ 7 ]
The ability of bacteria to cause disease is described in terms of the number of infecting bacteria, the route of entry into the body, the effects of host defense mechanisms, and intrinsic characteristics of the bacteria called virulence factors . Many virulence factors are so-called effector proteins that are injected into the host cells by specialized secretion apparati, such as the type three secretion system . Host-mediated pathogenesis is often important because the host can respond aggressively to infection with the result that host defense mechanisms do damage to host tissues while the infection is being countered (e.g., cytokine storm ). [ citation needed ]
The virulence factors of bacteria are typically proteins or other molecules that are synthesized by enzymes . These proteins are coded for by genes in chromosomal DNA, bacteriophage DNA or plasmids . Certain bacteria employ mobile genetic elements and horizontal gene transfer . Therefore, strategies to combat certain bacterial infections by targeting these specific virulence factors and mobile genetic elements have been proposed. [ 8 ] Bacteria use quorum sensing to synchronise release of the molecules. These are all proximate causes of morbidity in the host. [ citation needed ]
Viral virulence factors allow a virus to replicate, modify host defenses, and spread within the host, and they are toxic to the host. [ 9 ]
They determine whether infection occurs and how severe the resulting viral disease symptoms are. Viruses often require receptor proteins on host cells to which they specifically bind. Typically, these host cell proteins are endocytosed and the bound virus then enters the host cell. Virulent viruses such as HIV , which causes AIDS , have mechanisms for evading host defenses. HIV infects T-helper cells , which leads to a reduction of the adaptive immune response of the host and eventually leads to an immunocompromised state. Death results from opportunistic infections secondary to disruption of the immune system caused by AIDS. Some viral virulence factors confer ability to replicate during the defensive inflammation responses of the host such as during virus-induced fever . Many viruses can exist inside a host for long periods during which little damage is done. Extremely virulent strains can eventually evolve by mutation and natural selection within the virus population inside a host. The term " neurovirulent " is used for viruses such as rabies and herpes simplex which can invade the nervous system and cause disease there. [ citation needed ]
Extensively studied model organisms of virulent viruses include virus T4 and other T-even bacteriophages which infect Escherichia coli and a number of related bacteria . [ citation needed ]
The lytic life cycle of virulent bacteriophages is contrasted by the temperate lifecycle of temperate bacteriophages. [ 10 ] [ 11 ]
|
https://en.wikipedia.org/wiki/Virulence
|
Virus classification is the process of naming viruses and placing them into a taxonomic system similar to the classification systems used for cellular organisms .
Viruses are classified by phenotypic characteristics, such as morphology , nucleic acid type, mode of replication, host organisms , and the type of disease they cause. The formal taxonomic classification of viruses is the responsibility of the International Committee on Taxonomy of Viruses (ICTV) system, although the Baltimore classification system can be used to place viruses into one of seven groups based on their manner of mRNA synthesis. Specific naming conventions and further classification guidelines are set out by the ICTV.
In 2021, the ICTV changed the International Code of Virus Classification and Nomenclature (ICVCN) to mandate a binomial format (genus + species) for naming new viral species, similar to that used for cellular organisms; the names of species coined prior to 2021 are gradually being converted to the new format, a process planned for completion by the end of 2023.
As of 2022, the ICTV taxonomy listed 11,273 named virus species (including some classed as satellite viruses and others as viroids) in 2,818 genera, 264 families, 72 orders, 40 classes, 17 phyla, 9 kingdoms and 6 realms. [ 1 ] However, the number of named viruses considerably exceeds the number of named virus species since, by contrast to the classification systems used elsewhere in biology, a virus "species" is a collective name for a group of (presumably related) viruses sharing certain common features (see below). Also, the use of the term "kingdom" in virology does not equate to its usage in other biological groups, where it reflects high level groupings that separate completely different kinds of organisms (see Kingdom (biology) ).
The currently accepted and formal definition of a 'virus' was accepted by the ICTV Executive Committee in November 2020 and ratified in March 2021, and is as follows: [ 2 ]
Viruses sensu stricto are defined operationally by the ICTV as a type of MGE that encodes at least one protein that is a major component of the virion encasing the nucleic acid of the respective MGE and therefore the gene encoding the major virion protein itself or MGEs that are clearly demonstrable to be members of a line of evolutionary descent of such major virion protein-encoding entities. Any monophyletic group of MGEs that originates from a virion protein-encoding ancestor should be classified as a group of viruses.
Species form the basis for any biological classification system. Before 1982, it was thought that viruses could not be made to fit Ernst Mayr 's reproductive concept of species, and so were not amenable to such treatment. In 1982, the ICTV started to define a species as "a cluster of strains" with unique identifying qualities. In 1991, the more specific principle that a virus species is a polythetic class of viruses that constitutes a replicating lineage and occupies a particular ecological niche was adopted. [ 3 ]
As at 2021 (the latest edition of the ICVCN), the ICTV definition of species states: "A species is the lowest taxonomic level in the hierarchy approved by the ICTV. A species is a monophyletic group of MGEs ( mobile genetic elements ) whose properties can be distinguished from those of other species by multiple criteria", with the comment "The criteria by which different species within a genus are distinguished shall be established by the appropriate Study Group. These criteria may include, but are not limited to, natural and experimental host range, cell and tissue tropism, pathogenicity, vector specificity, antigenicity, and the degree of relatedness of their genomes or genes. The criteria used should be published in the relevant section of the ICTV Report and reviewed periodically by the appropriate Study Group." [ 4 ]
Many individually named viruses (sometimes referred to as "virus strains") exist at below the rank of virus species . The ICVCN gives the examples of blackeye cowpea mosaic virus and peanut stripe virus, which are both classified in the species Bean common mosaic virus , the latter a member of the genus Potyvirus that will in due course receive a binomial name as Potyvirus [species...] . As another example, the virus SARS-CoV-1 , that causes severe acute respiratory syndrome ( SARS ) is different from the virus SARS-CoV-2 , the cause of the COVID-19 pandemic, but both are classified within the same virus species, a member of the genus Betacoronavirus that is currently known as Severe acute respiratory syndrome-related coronavirus which, per the 2021 mandate from the ICTV, will also receive a binomial name in due course. As set out in the ICVCN, section 3.4, the names [and definitions] of taxa below the rank of species are not governed by the ICTV; "Naming of such entities is not the responsibility of the ICTV but of international specialty groups. It is the responsibility of ICTV Study Groups to consider how these entities may best be classified into species." [ 4 ] Using the example given above, the virus causing the COVID-19 pandemic was given the designation "SARS-CoV-2" by the Coronaviridae Study Group (CSG) of the International Committee on Taxonomy of Viruses in 2020; in the same publication, this Study Group recommended a naming convention for particular isolates of this virus "resembl[ing] the formats used for isolates of avian coronaviruses, filoviruses and influenza virus" in the format virus/host/location/isolate/date, with a cited example as "SARS-CoV-2/human/Wuhan/X1/2019". [ 5 ]
The International Committee on Taxonomy of Viruses began to devise and implement rules for the naming and classification of viruses early in the 1970s, an effort that continues to the present. The ICTV is the only body charged by the International Union of Microbiological Societies with the task of developing, refining, and maintaining a universal virus taxonomy, following the methods set out in the International Code of Virus Classification and Nomenclature. [ 4 ] [ 6 ] The system shares many features with the classification system of cellular organisms , such as taxon structure. However, some differences exist, such as the universal use of italics for all taxonomic names, unlike in the International Code of Nomenclature for algae, fungi, and plants and International Code of Zoological Nomenclature .
Viral classification starts at the level of realm and continues down to species, with each rank carrying a characteristic taxonomic suffix (for example, -viria for realms, -virales for orders, and -viridae for families). [ 4 ]
In parallel to the system of binomial nomenclature adopted in cellular species, the ICTV has recently (2021) mandated that new virus species be named using a binomial format ( Genus species , e.g. Betacoronavirus pandemicum ), and that pre-existing virus species names be progressively replaced with new names in the binomial format. [ 7 ] A mid-2023 review of the status of this changeover stated: "...a large number of proposals [concerning virus nomenclature, submitted to the ICTV Executive Committee (EC) for its consideration] renamed existing species for compliance with the recently mandated binomial nomenclature format. As a result, 8,982 out of the current 11,273 species (80%) now have binomial names. The process will be concluded in 2023, with the remaining 2,291 species being renamed." [ 8 ]
As of 2025, all levels of taxa except subrealm, subkingdom, and subclass are used. Seven realms, one incertae sedis class, 25 incertae sedis families, and two incertae sedis genera are recognized. [ 9 ]
It has been suggested that similarity in virion assembly and structure observed for certain viral groups infecting hosts from different domains of life (e.g., bacterial tectiviruses and eukaryotic adenoviruses or prokaryotic Caudovirales and eukaryotic herpesviruses) reflects an evolutionary relationship between these viruses. [ 10 ] Therefore, structural relationship between viruses has been suggested to be used as a basis for defining higher-level taxa – structure-based viral lineages – that could complement the ICTV classification scheme of 2010. [ 11 ]
The ICTV has gradually added many higher-level taxa using relationships in protein folds. All four realms defined in the 2019 release are defined by the presence of a protein of a certain structural family. [ 12 ]
Baltimore classification (first defined in 1971) is a classification system that places viruses into one of seven groups depending on a combination of their nucleic acid ( DNA or RNA ), strandedness (single-stranded or double-stranded), sense , and method of replication . [ 13 ] Named after David Baltimore , a Nobel Prize -winning biologist, these groups are designated by Roman numerals . Other classifications are determined by the disease caused by the virus or its morphology, neither of which is satisfactory, since different viruses may cause the same disease or look very similar. In addition, viral structures are often difficult to determine under the microscope. Classifying viruses according to their genome means that those in a given category will all behave in a similar fashion, offering some indication of how to proceed with further research. Viruses can be placed in one of the following seven groups: I (double-stranded DNA viruses), II (single-stranded DNA viruses), III (double-stranded RNA viruses), IV (positive-sense single-stranded RNA viruses), V (negative-sense single-stranded RNA viruses), VI (positive-sense single-stranded RNA viruses that replicate through a DNA intermediate, i.e. retroviruses), and VII (double-stranded DNA viruses that replicate through a single-stranded RNA intermediate). [ 14 ]
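To make the grouping criteria concrete, here is a minimal lookup sketch in Python; the key structure (nucleic acid, strandedness/sense, reverse transcription) is a simplification chosen for illustration rather than an official encoding.

```python
# Simplified lookup of the seven Baltimore groups.
# Keys: (nucleic acid, strandedness/sense, uses reverse transcription)
BALTIMORE_GROUPS = {
    ("DNA", "double-stranded", False): "I",
    ("DNA", "single-stranded", False): "II",
    ("RNA", "double-stranded", False): "III",
    ("RNA", "single-stranded (+)", False): "IV",
    ("RNA", "single-stranded (-)", False): "V",
    ("RNA", "single-stranded (+)", True): "VI",   # retroviruses (RNA -> DNA intermediate)
    ("DNA", "double-stranded", True): "VII",      # e.g. hepadnaviruses (DNA -> RNA intermediate)
}

print(BALTIMORE_GROUPS[("RNA", "single-stranded (+)", False)])  # "IV"
```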
Viruses with a DNA genome , except for the DNA reverse transcribing viruses , are members of three of the four recognized viral realms : Duplodnaviria , Monodnaviria , and Varidnaviria . But the incertae sedis order Ligamenvirales , and many other incertae sedis families and genera, are also used to classify DNA viruses. The realms Duplodnaviria and Varidnaviria consist of double-stranded DNA viruses; other double-stranded DNA viruses are incertae sedis . The realm Monodnaviria consists of single-stranded DNA viruses that generally encode a HUH endonuclease ; other single-stranded DNA viruses are incertae sedis . [ 15 ]
All viruses that have an RNA genome , and that encode an RNA-dependent RNA polymerase (RdRp), are members of the kingdom Orthornavirae , within the realm Riboviria . [ 16 ]
All viruses that encode a reverse transcriptase (also known as RT or RNA-dependent DNA polymerase) are members of the class Revtraviricetes , within the phylum Artverviricota , kingdom Pararnavirae , and realm Riboviria . The order Blubervirales contains the single family Hepadnaviridae of DNA RT (reverse transcribing) viruses; all other RT viruses are members of the order Ortervirales . [ 17 ]
Holmes (1948) used a Linnaean taxonomy with binomial nomenclature to classify viruses into three groups under one order, Virales .
The system was not accepted by others due to its neglect of morphological similarities. [ 18 ]
Some infectious agents, known as subviral agents, are smaller than viruses and have only some of their properties. [ 19 ] [ 20 ] Since 2015, the ICTV has allowed them to be classified in a similar way to viruses. [ 21 ]
Satellites depend on co-infection of a host cell with a helper virus for productive multiplication. Their nucleic acids have substantially distinct nucleotide sequences from either their helper virus or host. When a satellite subviral agent encodes the coat protein in which it is encapsulated, it is then called a satellite virus.
Satellite-like nucleic acids resemble satellite nucleic acids, in that they replicate with the aid of helper viruses. However they differ in that they can encode functions that can contribute to the success of their helper viruses; while they are sometimes considered to be genomic elements of their helper viruses, they are not always found within their helper viruses. [ 19 ]
Defective interfering particles are defective viruses that have lost their ability to replicate except in the presence of a helper virus, which is normally the parental virus. They can also interfere with the helper virus.
Viriforms are a polyphyletic category of endogenous viral elements . Sometime in their evolution, they became "domesticated" by their host as a key part of the host's lifecycle. The prototypical example is members of the (also polyphyletic) Polydnaviriformidae , which are used by wasps to send pieces of immunity-blunting DNA into the prey by packing them into virion-like particles . Other members are so-called gene transfer agents (GTAs) found among prokaryotes. GTA particles resemble tailed phages , but are smaller and carry mostly random pieces of host DNA. GTAs are produced by the host in times of stress; releasing GTAs kills the host cell, but allows pieces of its genetic material to live on in other bacteria, usually of the same species. [ 25 ] The three known clades of GTAs, Rhodogtaviriformidae , Bartogtaviriformidae , and Brachygtaviriformidae , all arose independently from different parts of the Caudoviricetes family tree. [ 26 ]
|
https://en.wikipedia.org/wiki/Virus_classification
|
Virus latency (or viral latency ) is the ability of a pathogenic virus to lie dormant ( latent ) within a cell, denoted as the lysogenic part of the viral life cycle. [ 1 ] A latent viral infection is a type of persistent viral infection which is distinguished from a chronic viral infection. Latency is the phase in certain viruses' life cycles in which, after initial infection, proliferation of virus particles ceases. However, the viral genome is not eradicated. The virus can reactivate and begin producing large amounts of viral progeny (the lytic part of the viral life cycle ) without the host becoming reinfected by new outside virus, and stays within the host indefinitely. [ 2 ]
Virus latency is not to be confused with clinical latency during the incubation period when a virus is not dormant.
Episomal latency refers to the use of genetic episomes during latency. In this latency type, viral genes are stabilized, floating in the cytoplasm or nucleus as distinct objects, either as linear or lasso -shaped structures. Episomal latency is more vulnerable to ribozymes or host foreign gene degradation than proviral latency (see below).
One example is the herpes virus family, Herpesviridae , all of which establish latent infection. Herpes viruses include the chickenpox virus and the herpes simplex viruses (HSV-1, HSV-2), all of which establish episomal latency in neurons and leave linear genetic material floating in the cytoplasm . [ 3 ]
The Gammaherpesvirinae subfamily is associated with episomal latency established in cells of the immune system , such as B-cells in the case of Epstein–Barr virus . [ 3 ] [ 4 ] Epstein–Barr virus lytic reactivation (which can be due to chemotherapy or radiation) can result in genome instability and cancer . [ 5 ]
In the case of herpes simplex virus (HSV), the virus has been shown to fuse with the DNA of neurons, such as those in nerve ganglia , [ 6 ] and HSV reactivates upon even minor chromatin loosening with stress, [ 7 ] although the chromatin compacts (the virus becomes latent) upon oxygen and nutrient deprivation. [ 8 ]
Cytomegalovirus (CMV) establishes latency in myeloid progenitor cells , and is reactivated by inflammation . [ 9 ] Immunosuppression and critical illness ( sepsis in particular) often results in CMV reactivation. [ 10 ] CMV reactivation is commonly seen in patients with severe colitis . [ 11 ]
Advantages of episomal latency include the fact that the virus may not need to enter the cell nucleus , and hence may avoid activation of interferon by nuclear domain 10 (ND10) via that pathway.
Disadvantages include more exposure to cellular defenses, leading to possible degradation of viral gene via cellular enzymes . [ 12 ]
Reactivation may be due to stress, UV light , etc. [ 13 ]
A provirus is a virus genome that is integrated into the DNA of a host cell.
Advantages include the fact that host cell division automatically results in replication of the virus's genes, and the fact that it is nearly impossible to remove an integrated provirus from an infected cell without killing the cell . [ 14 ]
A disadvantage of this method is the need to enter the nucleus (and the need for packaging proteins that will allow for that). However, viruses that integrate into the host cell's genome can stay there as long as the cell lives.
One of the best-studied viruses that exhibits viral latency is HIV . HIV uses reverse transcriptase to create a DNA copy of its RNA genome. HIV latency allows the virus to largely avoid the immune system. Like other viruses that go latent, it does not typically cause symptoms while latent. HIV in proviral latency is nearly impossible to target with antiretroviral drugs. Several classes of latency reversing agents (LRAs) are under development for possible use in shock-and-kill strategies in which the latently infected cellular reservoirs would be reactivated (the shock) so that anti-viral treatment could take effect (the kill). [ 15 ]
Both proviral and episomal latency may require maintenance for continued infection and fidelity of viral genes. Latency is generally maintained by viral genes expressed primarily during latency. Expression of these latency-associated genes may function to keep the viral genome from being digested by cellular ribozymes or being found out by the immune system . Certain viral gene products ( RNA transcripts such as non-coding RNAs and proteins) may also inhibit apoptosis or induce cell growth and division to allow more copies of the infected cell to be produced. [ 16 ]
An example of such a gene product is the latency associated transcripts (LAT) in herpes simplex virus, which interfere with apoptosis by downregulating a number of host factors, including major histocompatibility complex (MHC) and inhibiting the apoptotic pathway. [ 17 ]
A certain type of latency could be ascribed to the endogenous retroviruses . These viruses have incorporated into the human genome in the distant past, and are now transmitted through reproduction. Generally these types of viruses have become highly evolved, and have lost the expression of many gene products. [ 18 ] Some of the proteins expressed by these viruses have co-evolved with host cells to play important roles in normal processes. [ 19 ]
While viral latency exhibits no active viral shedding nor causes any pathologies or symptoms , the virus is still able to reactivate via external activators (sunlight, stress, etc.) to cause an acute infection. In the case of herpes simplex virus, which generally infects an individual for life, a serotype of the virus reactivates occasionally to cause cold sores . Although the sores are quickly resolved by the immune system, they may be a minor annoyance from time to time. In the case of varicella zoster virus , after an initial acute infection ( chickenpox ) the virus lies dormant until reactivated as herpes zoster .
More serious ramifications of a latent infection could be the possibility of transforming the cell, and forcing the cell into uncontrolled cell division . This is a result of the random insertion of the viral genome into the host's own gene and expression of host cellular growth factors for the benefit of the virus. In a notable event, this actually happened during gene therapy through the use of retroviral vectors at the Necker Hospital in Paris , where twenty young boys received treatment for a genetic disorder , after which five developed leukemia -like syndromes. [ 20 ]
This is also seen with infections of the human papilloma virus in which persistent infection may lead to cervical cancer as a result of cellular transformation . [ 21 ] [ 22 ] [ 23 ]
In the field of HIV research, proviral latency in specific long-lived cell types is the basis for the concept of one or more viral reservoirs, referring to locations (cell types or tissues) characterized by persistence of latent virus. Specifically, the presence of replication-competent HIV in resting CD4-positive T cells allows this virus to persist for years without evolving despite prolonged exposure to antiretroviral drugs. [ 24 ] This latent reservoir of HIV may explain the inability of antiretroviral treatment to cure HIV infection. [ 24 ] [ 25 ] [ 26 ] [ 27 ]
|
https://en.wikipedia.org/wiki/Virus_latency
|
Virus nanotechnology is the use of viruses as a source of nanoparticles for biomedical purposes.
Viruses are made up of a genome and a capsid , and some viruses are enveloped. Most virus capsids measure between 20 and 500 nm in diameter. Because of their nanometer-scale dimensions, viruses have been considered naturally occurring nanoparticles, and they have been studied within the nanoscience and nanoengineering disciplines. Viruses can be regarded as prefabricated nanoparticles . Many different viruses have been studied for various applications in nanotechnology : for example, mammalian viruses are being developed as vectors for gene delivery, and bacteriophages and plant viruses have been used in drug delivery and imaging applications as well as in vaccines and immunotherapy intervention. [ 1 ]
Virus nanotechnology is a promising, emerging discipline within nanotechnology. A highly interdisciplinary field, viral nanotechnology occupies the interface between virology, biotechnology, chemistry, and materials science. The field employs viral nanoparticles (VNPs) and their counterparts, virus-like particles (VLPs), for potential applications in electronics, sensors, and, most significantly, in the clinical field. [ 2 ] VNPs and VLPs are attractive building blocks for several reasons. Both types of particle are on the nanometer size scale; they are monodisperse, with a high degree of symmetry and polyvalency; they can be produced with ease on a large scale; they are exceptionally stable and robust; and they are biocompatible and, in some cases, orally bioavailable. [ 3 ] They are "programmable" units that can be modified by either genetic modification or chemical bioconjugation methods. [ 4 ] [ 5 ]
Nanotechnology is the manipulation or self-assembly of individual atoms, molecules, or molecular clusters into structures to create materials and devices with new or vastly different properties. Nanotechnology can work from the top down (reducing the size of the smallest structures to the nanoscale) or from the bottom up (manipulating individual atoms and molecules into nanostructures). The definition of nanotechnology is based on the prefix "nano", from the Greek word meaning "dwarf". In more technical terms, "nano" means 10⁻⁹, or one billionth. For comparison, a virus is roughly 100 nanometres (nm) in size, so a virus can also be regarded as a nanoparticle. The word nanotechnology is generally used for materials with sizes of 0.1 to 100 nanometres; however, it is also inherent that these materials should display properties different from bulk (micrometric and larger) materials as a result of their size. [ 2 ] These differences include physical strength, chemical reactivity, electrical conductance, magnetism and optical effects.
Nanotechnology has an almost limitless string of applications in biology, biotechnology, and biomedicine. [ 6 ] Nanotechnology has engendered a growing sense of excitement due to the ability to produce and utilize materials, devices, and systems through the control of matter on the nanometer scale (1 to 50 nm). This bottom-up approach requires less material and causes less pollution. Nanotechnology has had several commercial applications in advanced laser technology, hard coatings, photography, pharmaceuticals, printing, chemical-mechanical polishing, and cosmetics. [ 7 ] Soon, there will be lighter cars using nanoparticle reinforced polymers, orally applicable insulin, artificial joints made from nanoparticulate materials, and low-calorie foods with nanoparticulate taste enhancers. [ 8 ]
Viruses have long been studied as deadly pathogens that cause disease in all forms of life. [ 9 ] By the 1950s, researchers had begun thinking of viruses as tools in addition to pathogens. Bacteriophage genomes and components of the protein expression machinery have been widely utilized as tools for understanding fundamental cellular processes. On the basis of these studies, several viruses have been exploited as expression systems in biotechnology. Later, in the 1970s, viruses began to be used as vectors for the benefit of humans. [ 10 ] Since then, viruses have often been used as vectors for gene therapy, cancer control and the control of harmful or damaging organisms, in both agriculture and medicine. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
More recently, the approach to exploiting viruses and their capsids for biotechnology began to shift toward using them for nanotechnology applications. Researchers Douglas and Young (Montana State University, Bozeman, MT, USA) were the first to consider the utility of a virus capsid as a nanomaterial. [ 16 ] They used the plant virus Cowpea chlorotic mottle virus (CCMV) for their study. CCMV is a highly dynamic platform with pH- and metal-ion-dependent structural transitions. Douglas and Young made use of these capsid dynamics and exchanged the natural cargo (nucleic acid) for synthetic materials. Since then, many materials have been encapsulated into CCMV and other VNPs. At about the same time, the research team led by Mann (University of Bristol, UK) pioneered a new area using the rod-shaped particles of TMV (tobacco mosaic virus). The particles were used as templates for the fabrication of a range of metallized nanotube structures using mineralization techniques. [ 17 ] TMV particles have also been utilized to generate various structures (nanotubes and nanowires) for use in batteries and data storage devices. [ 18 ] [ 19 ]
Viral capsids have attracted great interest in the field of nanobiology because of their nanoscale size, symmetrical structural organization, load capacity, controllable self-assembly, and ease of modification. Viruses are essentially naturally occurring nanomaterials capable of self-assembly with a high degree of precision. [ 4 ] Viral capsid-nanoparticle hybrid structures, which combine the bio-activities of virus capsids with the functions of nanoparticles, are a new class of bionanomaterials that have many potential applications as therapeutic and diagnostic vectors, imaging agents, and advanced nanomaterial synthesis reactors. [ 4 ]
Plant virus-based systems, in particular, are among the most advanced and most widely exploited for their potential use as bioinspired structured nanomaterials and nano-vectors. Plant virus nanoparticles are non-infectious to mammalian cells, as also shown by Raja Muthuramalingam et al. (2018). [ 20 ] Plant viruses have a size particularly suitable for nanoscale applications and can offer several advantages: they are structurally uniform, robust, biodegradable and easy to produce. [ 4 ] Moreover, there are many examples of functionalization of plant virus-based nanoparticles by modification of their external surface and by loading cargo molecules into their internal cavity. This plasticity in terms of nanoparticle engineering is the ground on which multivalency, payload containment and targeted delivery can be fully exploited. [ 21 ]
George P. Lomonossoff, writing in "Recent Advances in Plant Virology", states:
The capsids of most plant viruses are simple and robust structures consisting of multiple copies of one or a few types of protein subunit arranged with either icosahedral or helical symmetry. The capsids can be produced in large quantities either by the infection of plants or by the expression of the subunit(s) in a variety of heterologous systems. In view of their relative simplicity and ease of production, plant virus particles or virus-like particles (VLPs) have attracted much interest over the past 20 years for applications in both bio- and nanotechnology [Lomonossoff, 2011 [ 22 ] ]. As a result, plant virus particles have been subjected to both genetic and chemical modification, have been used to encapsulate foreign material and have themselves been incorporated into supramolecular structures. Significantly, the plant viruses studied are not human pathogens and have no natural tendency to interact with human cell surface receptors. [ 23 ] Recently, a plant pathogenic virus was reportedly used to synthesize a noble hybrid metal nanomaterial used as a bio-semiconductor. [ 20 ]
Viruses cause several destructive plant diseases and are responsible for massive losses in crop production and quality in all parts of the world. Infected plants may show a range of symptoms depending on the disease, but often there is severe leaf curling, stunting (abnormalities in the whole plant) and leaf yellowing (either of the whole leaf or in a pattern of stripes or blotches). [ 24 ] Most plant viruses are transmitted by a vector organism (insects, nematodes, plasmodiophorids and mites) that feeds on the plant, or (in some diseases) are introduced through wounds made, for example, during agricultural practices (e.g. pruning). Many plant viruses, for example tobacco mosaic virus, have been used as model systems to provide a basic understanding of how viruses express genes and replicate. Others permitted the elucidation of the processes underlying RNA silencing, now recognised as a core epigenetic mechanism underpinning numerous areas of biology. [ 25 ]
Manifold plant virus platform technologies are being developed and studied for many applications. [ 1 ]
|
https://en.wikipedia.org/wiki/Virus_nanotechnology
|
Virus quantification is counting or calculating the number of virus particles (virions) in a sample to determine the virus concentration. It is used in both research and development (R&D) in academic and commercial laboratories as well as in production situations where the quantity of virus at various steps is an important variable that must be monitored. For example, the production of virus-based vaccines , recombinant proteins using viral vectors, and viral antigens all require virus quantification to continually monitor and/or modify the process in order to optimize product quality and production yields and to respond to ever changing demands and applications. Other examples of specific instances where viruses need to be quantified include clone screening, multiplicity of infection (MOI) optimization, and adaptation of methods to cell culture .
There are many ways to categorize virus quantification methods. Here, the methods are grouped according to what is being measured and in what biological context. For example, cell-based assays typically measure infectious units (active virus). Other methods may measure the concentration of viral proteins, DNA, RNA, or molecular particles, but not necessarily measure infectivity. Each method has its own advantages and disadvantages, which often determines which method is used for specific applications. [ 1 ]
Plaque -based assays are a commonly used method to determine virus concentration in terms of infectious dose . Plaque assays determine the number of plaque forming units (PFU) in a virus sample, which is one measure of virus quantity. This assay is based on a microbiological method conducted in petri dishes or multi-well cell culture plates. Specifically, a confluent monolayer of host cells is infected by applying a sample containing the virus at varying dilutions and then covered with a semi-solid medium , such as agar or carboxymethyl cellulose , to prevent the virus infection from spreading indiscriminately, as would occur in a liquid medium. A viral plaque is formed after a virus infects a cell within the fixed cell monolayer. [ 2 ] The virus-infected cell will lyse and spread the infection to adjacent cells, where the infection-to-lysis cycle is repeated. This will create an area of infected, lysed cells (viral plaque) surrounded by uninfected, intact cells. The plaque can be seen with an optical microscope or visually using cell staining techniques (e.g., staining with a crystal violet solution to visualize intact vs. lysed cells). [ 3 ] Plaque formation can take 3–14 days, depending on the virus being analyzed. Plaques are generally counted manually, and the plaque count, in combination with the dilution factor of the infection solution (the sample initially applied to the cells), is used to calculate the number of plaque forming units per sample unit volume (PFU/mL). The PFU/mL number represents the concentration of infectious virus particles within the sample and is based on the assumption that each plaque formed is representative of an initial infection by one infectious virus particle. [ 4 ] [ 5 ]
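As a small illustration of that final calculation, here is a minimal Python sketch that converts a plaque count, dilution factor, and inoculum volume into a titer; the plate numbers and the function name are invented for demonstration.

```python
def pfu_per_ml(plaque_count, dilution_factor, inoculum_volume_ml):
    """Titer in plaque forming units per mL of the undiluted sample.

    plaque_count        -- plaques counted on one plate
    dilution_factor     -- e.g. 1e-6 for a 10^-6 dilution of the stock
    inoculum_volume_ml  -- volume of diluted sample applied to the monolayer
    """
    return plaque_count / (dilution_factor * inoculum_volume_ml)

# Hypothetical plate: 42 plaques from 0.1 mL of a 10^-6 dilution
print(pfu_per_ml(42, 1e-6, 0.1))   # 4.2e8 PFU/mL
```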
The focus forming assay (FFA) is a variation of the plaque assay, but instead of depending on cell lysis in order to detect plaque formation, the FFA employs immunostaining techniques using fluorescently labeled antibodies specific for a viral antigen to detect infected host cells and infectious virus particles before an actual plaque is formed. The FFA is particularly useful for quantifying classes of viruses that do not lyse the cell membranes, as these viruses would not be amenable to the plaque assay. Like the plaque assay, host cell monolayers are infected with various dilutions of the virus sample and allowed to incubate for a relatively brief incubation period (e.g., 24–72 hours) under a semisolid overlay medium that restricts the spread of infectious virus, creating localized clusters (foci) of infected cells. Plates are subsequently probed with fluorescently labeled antibodies against a viral antigen, and fluorescence microscopy is used to count and quantify the number of foci. The FFA method typically yields results in less time than plaque assays or fifty-percent-tissue-culture-infective-dose (TCID 50 ) assays (see below), but it can be more expensive in terms of required reagents and equipment. Assay completion time is also dependent on the size of area that the user is counting. A larger area will require more time but can provide a more accurate representation of the sample. Results of the FFA are expressed as focus forming units per milliliter, or FFU/mL. [ 6 ]
The TCID 50 (50% tissue culture infectious dose) assay is the measure of infectious virus titer . This endpoint dilution assay quantifies the amount of virus required to kill 50% of infected hosts or to produce a cytopathic effect in 50% of inoculated tissue culture cells. This assay may be more common in clinical research applications where the lethal dose of virus must be determined or if the virus does not form plaques. [ citation needed ] When used in the context of tissue culture, host cells are plated and serial dilutions of the virus are added. After incubation, the percentage of cell death (i.e. infected cells) is manually observed and recorded for each virus dilution, and results are used to mathematically calculate a TCID 50 result. [ 6 ] [ 7 ] Due to distinct differences in assay methods and principles, TCID 50 and pfu/mL or other infectivity assay results are not equivalent. This method can take up to a week due to cell infectivity time. [ 8 ]
Two methods commonly used to calculate TCID 50 (they can also be used to calculate other types of 50% endpoint such as EC50 , IC50 , and LD50 ) are the Spearman–Kärber method and the Reed–Muench method.
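As a sketch of the second of these, the Reed–Muench bookkeeping can be written out in a few lines of Python; the dilution series and well counts below are a textbook-style invented example, not data from the cited sources.

```python
def reed_muench_log_tcid50(dilution_exponents, infected, total):
    """Estimate log10 TCID50 (per inoculum volume) by the Reed-Muench method.

    dilution_exponents -- e.g. [-1, -2, -3, ...], most to least concentrated
    infected, total    -- wells showing infection / wells inoculated per dilution
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # Cumulative infected: this dilution plus all more dilute ones.
    cum_inf = [sum(infected[i:]) for i in range(n)]
    # Cumulative uninfected: this dilution plus all more concentrated ones.
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

    # Interpolate between the two dilutions straddling 50% infection.
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop_dist = (pct[i] - 50) / (pct[i] - pct[i + 1])
            step = dilution_exponents[i] - dilution_exponents[i + 1]
            return dilution_exponents[i] - prop_dist * step
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Invented 10-fold series, 8 wells per dilution
print(reed_muench_log_tcid50([-1, -2, -3, -4, -5, -6],
                             [8, 8, 6, 4, 2, 0],
                             [8, 8, 8, 8, 8, 8]))   # -4.0, i.e. 10^4 TCID50 per inoculum
```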
The theoretical relationship between TCID 50 and PFU is approximately 0.69 PFU = 1 TCID 50 based on the Poisson distribution , [ 10 ] a probability distribution which describes how many random events (virus particles) occurring at a known average rate (virus titer) are likely to occur in a fixed space (the amount of virus medium in a well). However, it must be emphasized that in practice, this relationship may not hold even for the same virus + cell combination, as the two types of assay are set up differently and virus infectivity is very sensitive to various factors such as cell age, overlay media, etc.
But the following reference defines the relationship differently:
From ATTC : "Assuming that the same cell system is used, that the virus forms plaques on those cells, and that no procedures are added which would inhibit plaque formation, 1 mL of virus stock would be expected to have about half of the number of plaque forming units (PFUs) as TCID 50 . This is only an estimate but is based on the rationale that the limiting dilution which would infect 50% of the cell layers challenged would often be expected to initially produce a single plaque in the cell layers which become infected. In some instances, two or more plaques might by chance form, and thus the actual number of PFUs should be determined experimentally.
"Mathematically, the expected PFUs would be somewhat greater than one-half the TCID 50 , since the negative tubes in the TCID 50 represent zero plaque forming units and the positive tubes each represent one or more plaque forming units. A more precise estimate is obtained by applying the Poisson distribution. Where P ( o ) {\displaystyle P(o)} is the proportion of negative tubes and m is the mean number of infectious units per volume (PFU/ml), P ( o ) = exp ( − m ) {\displaystyle P(o)=\exp(-m)} . For any titer expressed as a TCID 50 , P ( o ) = 0.5 {\displaystyle P(o)=0.5} . Thus exp ( − m ) = 0.5 {\displaystyle \exp(-m)=0.5} and m = − ln 0.5 {\displaystyle m=-\ln 0.5} which is ~ 0.7.
"Therefore, one could multiply the TCID 50 titer (per ml) by 0.7 to predict the mean number of PFU/ml. When actually applying such calculations, remember the calculated mean will only be valid if the changes in protocol required to visualize plaques do not alter the expression of infectious virus as compared with expression under conditions employed for TCID 50 .
"Thus as a working estimate, one can assume material with a TCID 50 of 1 × 10 5 TCID 50 /mL will produce 0.7 × 10 5 PFUs/mL." [ 11 ]
There are several variations of protein- and antibody-based virus quantification assays. In general, these methods quantify either the amount of all protein or the amount of a specific virus protein in the sample rather than the number of infected cells or virus particles. Quantification commonly relies on colorimetric or fluorescence detection. Some assay variations quantify proteins directly in a sample, while other variations require host cell infection and incubation to allow virus growth prior to quantification. The variation used depends primarily on the amount of protein (i.e. viral protein) in the initial sample and the sensitivity of the assay itself. If incubation and virus growth are required, cell and/or virus lysis/digestion are often conducted prior to analysis. Most protein-based methods are relatively fast and sensitive [ citation needed ] but require quality standards for accurate calibration, and quantify protein, not actual virus particle concentrations. Below are specific examples of widely used protein-based assays.
The hemagglutination assay (HA) is a common non-fluorescence protein quantification assay specific for influenza . It relies on the fact that hemagglutinin , a surface protein of influenza viruses, agglutinates red blood cells (i.e. causes red blood cells to clump together). In this assay, dilutions of an influenza sample are incubated with a 1% erythrocyte solution for one hour and the virus dilution at which agglutination first occurs is visually determined. The assay produces a result of hemagglutination units (HAU), with typical PFU to HAU ratios in the 10 6 range. [ 12 ] [ 13 ] [ 14 ] This assay takes ~1–2 hours to complete.
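As an illustrative sketch (assuming the common convention that the titer is read as the reciprocal of the highest dilution still showing agglutination), a two-fold dilution series could be reduced to an HAU value as follows; the series and observations are invented.

```python
def ha_titer(dilution_factors, agglutination):
    """Reciprocal of the highest dilution that still shows agglutination."""
    positive = [d for d, a in zip(dilution_factors, agglutination) if a]
    if not positive:
        raise ValueError("no agglutination observed at any dilution")
    return max(positive)   # expressed as HAU per assay volume

series = [2 ** i for i in range(1, 11)]      # 1:2 ... 1:1024 two-fold dilutions
observed = [d <= 128 for d in series]        # agglutination seen up to 1:128
print(ha_titer(series, observed))            # 128 HAU
```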
The hemagglutination inhibition assay is a common variation of the HA assay used to measure flu-specific antibody levels in blood serum. In this variation, serum antibodies to the influenza virus will interfere with the virus attachment to red blood cells. Therefore, hemagglutination is inhibited when antibodies are present at a sufficient concentration. [ 15 ]
The bicinchoninic acid assay (BCA; a.k.a. Smith assay) is based on a simple colorimetric measurement and is a commonly used protein quantification assay. [ 16 ] BCA is similar to the Lowry or Bradford protein assays. The BCA assay reagent was first developed and made commercially by Pierce Chemical Company (now owned by Thermo Fisher Scientific ) which held the patent until 2006. [ 17 ] [ 18 ]
In the BCA assay, a protein's peptide bonds quantitatively reduce Cu 2+ to Cu 1+ , which produces a light blue color. BCA chelates Cu 1+ at a 2:1 ratio, resulting in a more intensely colored species that absorbs at 562 nm. Absorbance of a sample at 562 nm is used to determine the bulk protein concentration in the sample. Assay results are compared with known standard curves after analysis with a spectrophotometer or plate reader . [ 19 ] Total assay time is 30 minutes to one hour. While this assay is ubiquitous and fast, it lacks specificity for viral proteins since it counts all protein in the sample. Thus the virus preparation to be quantified must contain very low levels of host cell proteins.
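To show how the standard-curve comparison works in practice, here is a minimal sketch that fits a straight line to invented BSA standards and interpolates an unknown; the absorbance values are illustrative, not from the cited sources.

```python
import numpy as np

# Invented standard curve: absorbance at 562 nm vs known BSA concentration (ug/mL)
standards_ug_ml = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)
absorbance_562  = np.array([0.05, 0.15, 0.26, 0.48, 0.90, 1.72])

# The assay response is roughly linear over its working range
slope, intercept = np.polyfit(standards_ug_ml, absorbance_562, 1)

# Interpolate an unknown sample from its measured absorbance
unknown_a562 = 0.62
print((unknown_a562 - intercept) / slope)   # estimated total protein, ug/mL
```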
Enzyme-linked immunosorbent assay ( ELISA ) is an antibody -based assay that utilizes an antigen -specific antibody chemically linked to an enzyme (or bound to a second antibody linked to an enzyme) to detect the presence of an unknown amount of the antigen (e.g., viral protein) in a sample. The antibody-antigen binding event is detected and/or quantified through the enzyme's ability to convert a substrate reagent to produce a detectable signal that can then be used to calculate the concentration of the target antigen in the sample. [ 20 ] Horseradish peroxidase (HRP) is a common enzyme utilized in ELISA schemes due to its ability to amplify signal and increase assay sensitivity.
There are many variations, or types of ELISA assays but they can generally be classified as either indirect , competitive , sandwich or reverse . [ 21 ]
Single radial immunodiffusion assay (SRID), also known as the Mancini method, is a protein assay that detects the amount of specific viral antigen by immunodiffusion in a semi-solid medium (e.g. agar). The medium contains antiserum specific to the antigen of interest and the antigen is placed in the center of the disc. As the antigen diffuses into the medium it creates a precipitate ring that grows until equilibrium is reached. Assay time can range from 10 hours to days depending on equilibration time of the antigen and antibody. The zone diameter from the ring is linearly related to the log of protein concentration and is compared to zone diameters for known protein standards for quantification. [ 22 ]
Quantitative PCR utilizes polymerase chain reaction chemistry to amplify viral DNA or RNA to produce high enough concentrations for detection and quantification by fluorescence. In general, quantification by qPCR relies on serial dilutions of standards of known concentration being analyzed in parallel with the unknown samples for calibration and reference. Quantitative detection can be achieved using a wide variety of fluorescence detection strategies, including sequence specific probes or non-specific fluorescent dyes such as SYBR Green . [ 23 ] Sequence-specific probes, such as TaqMan Molecular Beacons, or Scorpion, bind only to the DNA of the appropriate sequence produced during the reaction. SYBR Green dye binds to all double-stranded DNA [ 24 ] produced during the reaction.
While SYBR Green is easy to use, its lack of specificity and lower sensitivity lead most labs to use probe-based qPCR detection schemes. [ citation needed ] There are many variations of qPCR including the comparative threshold method, which allows relative quantification through comparison of Ct values (PCR cycles that show statistically significant increases in the product) from multiple samples that include an internal standard. [ 25 ]
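A minimal sketch of standard-curve quantification by qPCR, assuming a 10-fold standard series and a fixed template volume per reaction; the Ct values and volumes are invented for illustration.

```python
import numpy as np

# Invented standard curve: Ct measured for known genome copy numbers per reaction
log10_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
ct_values    = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

# Ct is approximately linear in log10(copies); a slope near -3.32 means ~100% efficiency
slope, intercept = np.polyfit(log10_copies, ct_values, 1)
efficiency = 10 ** (-1.0 / slope) - 1
print(round(slope, 2), f"{efficiency:.0%}")

# Quantify an unknown from its Ct, then express it per mL of extract
unknown_ct = 22.6
copies_per_reaction = 10 ** ((unknown_ct - intercept) / slope)
template_volume_ml = 0.005                   # assumed 5 uL of extract per reaction
print(copies_per_reaction / template_volume_ml, "genome copies/mL of extract")
```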
PCR amplifies all target nucleic acid , including ones originating from intact infectious viral particles, from defective viral particles as well as free nucleic acid in solution. Because of this, qPCR results (expressed in terms of genome copies/mL) are likely to be higher in quantity than TEM results. For viral quantification, the ratio of whole virions to copies of nucleic acid is seldom one to one. This is because during viral replication, the nucleic acid and viral proteins are not always produced in 1:1 ratio and viral assembly process results in complete virions as well as empty capsids and/or excess free viral genomes. In the example of foot-and-mouth disease virus, the ratio of whole virions to RNA copies within an actively replicating host cell is approximately 1:1000. [ 26 ] Advantages of titration by qPCR include quick turnaround time (1–4 hours) and sensitivity (can detect much lower concentration of viruses than other methods).
Tunable resistive pulse sensing (TRPS) is a method that allows high-throughput single-particle measurements of individual virus particles as they are driven through a size-tunable nanopore , one at a time. [ 27 ] The technique has the advantage of simultaneously determining the size and concentration of virus particles in solution with high resolution. This can be used in assessing sample stability and the contribution of aggregates, as well as total viral particle concentration (vp/mL). [ 28 ]
TRPS-based measurement occurs in an ionic buffer, and no pre-staining of samples is required prior to analysis, so the technique is more rapid than those which require pre-treatment with fluorescent dyes, with a total preparation and measurement time of less than 10 minutes per sample. [ citation needed ] TRPS-based virus analysis is commercially available through qViro-X systems , which can be decontaminated chemically by autoclaving after measurement has occurred.
This technique is similar to single-particle inductively coupled plasma mass spectroscopy (SP ICP-MS ), developed by Degueldre and Favarger (2003) [ 29 ] and later adapted for other nanoparticles (e.g. gold colloids; see Degueldre et al. (2006)). [ 30 ] SP ICP-MS was adapted for single-virus inductively coupled plasma mass spectroscopy (SV ICP-MS) in a comprehensive study, i.e. Degueldre (2021). [ 31 ] This study suggests adapting the method for the identification and counting of single viruses (SV). With high-resolution multi-channel sector-field (MC SF) ICP-MS records in SV detection mode, the counting of master and key ions can allow analysis and identification of single viruses. The counting of 2–500 viral units can be performed in 20 s. Analyses are proposed to be carried out in an Ar torch for the master ions 12C+, 13C+, 14N+ and 15N+, and the key ions 31P+, 32S+, 33S+ and 34S+. All interferences are discussed in detail. The use of high-resolution MC ICP-MS is recommended, while options with anaerobic/aerobic atmospheres are explored to upgrade the analysis when using quadrupole ICP-MS. Application to two virus types (SARS-CoV-2 and bacteriophage T5) is investigated using time-scan and fixed-mass analysis for the selected virus ions, allowing characterisation of the species using the N/C, P/C and S/C molar ratios and quantification of their number concentration.
While most flow cytometers do not have sufficient sensitivity, [ citation needed ] there are a few commercially available flow cytometers that can be used for virus quantification. A virus counter quantifies the number of intact virus particles in a sample using fluorescence to detect colocalized proteins and nucleic acids. Samples are stained with two dyes, one specific for proteins and one specific for nucleic acids, and analyzed as they flow through a laser beam. The quantity of particles producing simultaneous events on each of the two distinct fluorescence channels is determined, along with the measured sample flow rate, to calculate a concentration of virus particles (vp/mL). [ 32 ] The results are generally similar in absolute quantity to a TEM result. The assay has a linear working range of 10 5 –10 9 vp/mL and an analysis time of ~10 min with a short sample preparation time. [ citation needed ]
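The conversion from counted events to a concentration is simple arithmetic; the sketch below assumes the analyzed volume is obtained from the measured flow rate and acquisition time, and all numbers are invented rather than taken from any particular instrument.

```python
coincident_events = 12_000        # events positive on both fluorescence channels
flow_rate_ul_min  = 5.0           # measured sample flow rate (uL/min)
acquisition_min   = 10.0          # acquisition time (min)

analyzed_volume_ml = flow_rate_ul_min * acquisition_min / 1000.0
print(coincident_events / analyzed_volume_ml, "vp/mL")   # 240000.0 vp/mL (2.4e5)
```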
TEM is a specialized type of microscopy that utilizes a beam of electrons focused with a magnetic field to image a sample. TEM provides imaging with 1000x greater spatial resolution than a light microscope (resolution down to 0.2 nm). [ 33 ] An ultrathin, negatively stained sample is required. Sample preparations involve depositing specimens onto a coated TEM grid and negative staining with an electron-opaque liquid. [ 34 ] Tissue embedded samples can also be examined if thinly sectioned. Sample preparations vary depending on protocol and user but generally require hours to complete. TEM images can show individual virus particles and quantitative image analysis can be used to determine virus concentrations. These high resolution images also provide particle morphology information that most other methods cannot. Quantitative TEM results will often be greater than results from other assays [ citation needed ] as all particles, regardless of infectivity, are quantified in the reported virus-like particles per mL (vlp/mL) result. Quantitative TEM generally works well for virus concentrations greater than 10 6 particles/mL. Because of high instrument cost and the amount of space and support facilities needed, TEM equipment is only available in a few laboratories.
|
https://en.wikipedia.org/wiki/Virus_quantification
|
Vis medicatrix naturae (literally "the healing power of nature", and also known as natura medica ) is the Latin rendering of the Greek Νόσων φύσεις ἰητροί ("Nature is the physician(s) of diseases"), a phrase attributed to Hippocrates . While the phrase is not actually attested in his corpus, [ 1 ] it nevertheless sums up one of the guiding principles of Hippocratic medicine, which is that organisms left alone can often heal themselves ( cf. the Hippocratic primum non nocere ).
Hippocrates believed that an organism is not passive to injuries or disease , but rebalances itself to counteract them. The state of illness , therefore, is not a malady but an effort of the body to overcome a disturbed equilibrium. It is this capacity of organisms to correct imbalances that distinguishes them from non-living matter. [ 2 ]
From this follows the medical approach that “nature is the best physician” or “nature is the healer of disease”. Accordingly, Hippocrates considered a doctor's chief aim to be helping this natural tendency of the body by observing its action, removing obstacles to its action, and thus allowing an organism to recover its own health. [ 3 ] This underlies such Hippocratic practices as bloodletting, in which a perceived excess of one of the humors is removed and which was thus taken to help rebalance the body's humors . [ 4 ]
After Hippocrates, the idea of vis medicatrix naturae continued to play a key role in medicine. In the early Renaissance , the physician and early scientist Paracelsus had the idea of an “inherent balsam”. Thomas Sydenham , in the 17th century, considered fever a healing force of nature. [ 3 ]
In the nineteenth-century, vis medicatrix naturae came to be interpreted as vitalism , and in this form it came to underlie the philosophical framework of homeopathy , chiropractic , hydropathy , osteopathy and naturopathy . [ 5 ]
Walter Cannon 's notion of homeostasis also has its origins in vis medicatrix naturae . "All that I have done thus far in reviewing the various protective and stabilizing devices of the body is to present a modern interpretation of the natural vis medicatrix.". [ 6 ] In this, Cannon stands in contrast to Claude Bernard (the father of modern physiology ), and his earlier idea of milieu interieur that he proposed to replace vitalistic ideas about the body. [ 6 ] However, both the notions of homeostasis and milieu interieur are ones concerned with how the body's physiology regulates itself through multiple mechanical equilibrium adjustment feedbacks rather than nonmechanistic life forces .
More recently, evolutionary medicine has identified many medical symptoms such as fever , inflammation , sickness behavior , and morning sickness as evolved adaptations that function as darwinian medicatrix naturae due to their selection as means to protect, heal, or restore the injured, infected or physiologically disrupted body. [ 7 ]
|
https://en.wikipedia.org/wiki/Vis_medicatrix_naturae
|
Vis viva (from the Latin for "living force") is a historical term used to describe a quantity similar to kinetic energy in an early formulation of the principle of conservation of energy .
Proposed by Gottfried Leibniz over the period 1676–1689, the theory was controversial as it seemed to oppose the theory of conservation of quantity of motion advocated by René Descartes . [ 1 ] Descartes' quantity of motion was different from momentum , but Newton defined the quantity of motion as the conjunction of the quantity of matter and velocity in Definition II of his Principia . In Definition III, he defined the force that resists a change in motion as the vis inertia of Descartes. Newton's third law of motion (for every action there is an equal and opposite reaction) is also equivalent to the principle of conservation of momentum . Leibniz accepted the principle of conservation of momentum, but rejected the Cartesian version of it. [ 1 ] The difference between these ideas was whether the quantity of motion was simply related to a body's resistance to a change in velocity (vis inertia) or whether a body's amount of force due to its motion (vis viva) was related to the square of its velocity.
The theory was eventually absorbed into the modern theory of energy , though the term still survives in the context of celestial mechanics through the vis viva equation . The English equivalent "living force" was also used, for example by George William Hill . [ 2 ]
The term is due to the German philosopher Gottfried Wilhelm Leibniz , who was the first to attempt a mathematical formulation from 1676 to 1689. Leibniz noticed that in many mechanical systems (of several masses , m i each with velocity v i ) the quantity [ 3 ]
∑ i m i v i 2 {\displaystyle \sum _{i}m_{i}v_{i}^{2}}
was conserved. He called this quantity the vis viva or "living force" of the system. [ 3 ] The principle represented an accurate statement of the conservation of kinetic energy in elastic collisions that was independent of the conservation of momentum .
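The conservation statement can be made concrete with a short, purely illustrative Python sketch (not from the source): for a one-dimensional elastic collision of two point masses, both the vis viva Σ mᵢvᵢ² and the momentum Σ mᵢvᵢ come out unchanged. The masses and velocities are arbitrary example values.

```python
# Illustrative sketch: check that the vis viva sum(m_i * v_i**2) and the
# momentum sum(m_i * v_i) are both conserved in a 1-D elastic collision.

def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1-D collision."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

def vis_viva(masses, velocities):
    return sum(m * v ** 2 for m, v in zip(masses, velocities))

def momentum(masses, velocities):
    return sum(m * v for m, v in zip(masses, velocities))

m = (2.0, 3.0)                 # kg (example values)
v_before = (1.5, -0.5)         # m/s (example values)
v_after = elastic_collision_1d(m[0], v_before[0], m[1], v_before[1])

print(vis_viva(m, v_before), vis_viva(m, v_after))    # equal: vis viva conserved
print(momentum(m, v_before), momentum(m, v_after))    # equal: momentum conserved
```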
However, many physicists at the time were unaware of this fact and, instead, were influenced by the prestige of Sir Isaac Newton in England and of René Descartes in France , both of whom advanced the conservation of momentum as a guiding principle. Thus the momentum: [ 3 ]
∑ i m i v i {\displaystyle \sum _{i}m_{i}v_{i}}
was held by the rival camp to be the conserved vis viva . It was largely engineers such as John Smeaton , Peter Ewart , Karl Holtzmann , Gustave-Adolphe Hirn and Marc Seguin who objected that conservation of momentum alone was not adequate for practical calculation and who made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston .
The French mathematician Émilie du Châtelet , who had a sound grasp of Newtonian mechanics, developed Leibniz's concept and, combining it with the observations of Willem 's Gravesande , showed that vis viva was dependent on the square of the velocities. [ 4 ]
Members of the academic establishment such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics , but in the 18th and 19th centuries, the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion was another form of vis viva . In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory . [1] Count Rumford 's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat. Vis viva began to be known as energy after Thomas Young first used the term in 1807.
The recalibration of vis viva to include the coefficient of a half, namely:
1 2 ∑ i m i v i 2 {\displaystyle {\frac {1}{2}}\sum _{i}m_{i}v_{i}^{2}}
was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839, [ 6 ] although the present-day definition can occasionally be found earlier (e.g., in Daniel Bernoulli 's texts). The former called it the quantité de travail (quantity of work) and the latter, travail mécanique (mechanical work) and both championed its use in engineering calculation.
|
https://en.wikipedia.org/wiki/Vis_viva
|
A visbreaker is a processing unit in an oil refinery whose purpose is to minimize the quantity of residual oil produced in the distillation of crude oil and to increase the yield of more valuable middle distillates ( heating oil and diesel ) by the refinery. A visbreaker thermally cracks large hydrocarbon molecules in the oil by heating it in a furnace, lowering its viscosity and producing small quantities of light hydrocarbons ( LPG and gasoline ). [ 1 ] [ 2 ] [ 3 ] The process name "visbreaker" refers to the fact that the process lowers (i.e., breaks) the viscosity of the residual oil. The process is non- catalytic .
The objectives of visbreaking are to reduce the viscosity of the residual oil, to reduce the quantity of residual fuel the refinery must produce, and to increase the yield of more valuable distillates.
The term coil (or furnace ) visbreaking is applied to units where the cracking process occurs in the furnace tubes (or "coils"). Material exiting the furnace is quenched to halt the cracking reactions: frequently this is achieved by heat exchange with the virgin material being fed to the furnace, which in turn is a good energy efficiency step, but sometimes a stream of cold oil (usually gas oil) is used to the same effect. The gas oil is recovered and re-used. The extent of the cracking reaction is controlled by regulation of the speed of flow of the oil through the furnace tubes. The quenched oil then passes to a fractionator where the products of the cracking (gas, LPG, gasoline, gas oil and tar) are separated and recovered.
In soaker visbreaking, the bulk of the cracking reaction occurs not in the furnace but in a drum located after the furnace called the soaker. Here the oil is held at an elevated temperature for a pre-determined period of time to allow cracking to occur before being quenched. The oil then passes to a fractionator . In soaker visbreaking, lower temperatures are used than in coil visbreaking. The comparatively long duration of the cracking reaction is used instead.
Visbreaker tar can be further refined by feeding it to a vacuum fractionator . Here additional heavy gas oil may be recovered and routed either to catalytic cracking , hydrocracking or thermal cracking units on the refinery. The vacuum-flashed tar (sometimes referred to as pitch ) is then routed to fuel oil blending. In a few refinery locations, visbreaker tar is routed to a delayed coker for the production of certain specialist cokes such as anode coke or needle coke .
From the standpoint of yield, there is little or nothing to choose between the two approaches; however, each offers significant advantages in particular situations.
The quality of the feed going into a visbreaker will vary considerably with the type of crude oil that the refinery is processing. A typical feed is the vacuum distillation residue of Arabian Light, a crude oil from Saudi Arabia that is widely refined around the world.
Once this material has been run through a visbreaker (and, again, there will be considerable variation from visbreaker to visbreaker, as no two operate under exactly the same conditions), the reduction in viscosity is dramatic.
The yields of the various hydrocarbon products will depend on the "severity" of the cracking operation, as determined by the temperature to which the oil is heated in the visbreaker furnace. At the low end of the scale, a furnace heating to 425 °C would crack only mildly, while operation at 500 °C would be considered very severe. Arabian Light crude residue visbroken at 450 °C would yield around 76% (by weight) tar, 15% middle distillates, 6% gasolines and 3% gas and LPG.
The severity of visbreaker operation is normally limited by the need to produce a visbreaker tar that can be blended to make a stable fuel oil.
Stability in this case means the tendency of a fuel oil to produce sediments when stored. These sediments are undesirable because they can quickly foul the filters of the pumps used to move the oil, necessitating time-consuming maintenance.
Vacuum residue fed to a visbreaker can be considered to be composed of several classes of compounds: chiefly aliphatic and aromatic material, together with asphaltenes held in colloidal suspension.
Visbreaking preferentially cracks the aliphatic compounds, which have relatively low sulphur content, low density and high viscosity, and the effect of their removal can be clearly seen in the change in quality between feed and product. Cracking too severely in a visbreaker will cause the asphaltene colloid to become metastable. Subsequent addition of a diluent to manufacture a finished fuel oil can then cause the colloid to break down, precipitating asphaltenes as a sludge. A paraffinic diluent has been observed to be more likely to cause precipitation than an aromatic one. The stability of a fuel oil is assessed using a number of proprietary tests (for example the "P" value and SHF tests).
The viscosity blending of two or more liquids having different viscosities is a three-step procedure. The first step is to calculate the Viscosity Blending Number (VBN) of each component of the blend using the following equation (known as the Refutas equation): [ 2 ] [ 4 ]
(1)   VBN = 14.534 × ln[ln( v + 0.8)] + 10.975
where v is the kinematic viscosity in square millimeters per second (mm²/s) or centistokes (cSt) and ln is the natural logarithm (log e ). It is important that the viscosity of each component of the blend be obtained at the same temperature.
The next step is to calculate the VBN of the blend, using this equation:
(2)   VBN Blend = Σ [ w i × VBN i ], summed over all components i
where w is the weight fraction (i.e., % ÷ 100) of each component of the blend.
Once the viscosity blending number of a blend has been calculated using equation (2), the final step is to determine the viscosity of the blend by using the inverse of equation (1):
(3)   v = exp(exp((VBN − 10.975) ⁄ 14.534)) − 0.8
where VBN is the viscosity blending number of the blend and e is the transcendental number 2.71828, also known as Euler's number .
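The three-step procedure can be sketched in a few lines of Python, assuming the Refutas constants of equations (1) and (3) above; the component viscosities and weight fractions in the example call are made-up values.

```python
import math

def vbn(viscosity_cst):
    """Viscosity Blending Number (Refutas equation (1)) for a kinematic
    viscosity in centistokes; all components must be at the same temperature."""
    return 14.534 * math.log(math.log(viscosity_cst + 0.8)) + 10.975

def blend_viscosity(components):
    """components: list of (weight_fraction, viscosity_cst) pairs.
    Returns the kinematic viscosity of the blend in centistokes."""
    vbn_blend = sum(w * vbn(v) for w, v in components)          # equation (2)
    # invert the Refutas equation (3) to recover viscosity from the blend VBN
    return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8

# Illustrative blend: 30% of a 1.3 cSt distillate with 70% of a 1000 cSt residue
print(round(blend_viscosity([(0.30, 1.3), (0.70, 1000.0)]), 1))   # ~34 cSt
```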
A marketable fuel oil, such as for fueling a power station, might be required to have a viscosity of 40 centistokes at 100 °C. It might be prepared using either the virgin or visbroken residue described above combined with a distillate diluent ("cutter stock"). Such a cutter stock could typically have a viscosity at 100 °C of 1.3 centistokes . Rearranging equation (2) above for a simple two-component blend shows that the percentage of cutter stock required in the blend is found by:
(4)   % cutter stock = [VBN (40 cSt fuel oil) − VBN (residue) ] ÷ [VBN (cutter stock) − VBN (residue) ]
Using the viscosities quoted in the tables above for the residues from Arab Light crude oil and calculating VBNs according to equation (1) gives:
For virgin residue (i.e., the unconverted feed to the visbreaker): 27.5% cutter stock in the blend
For visbroken residue: 13.3% cutter stock in the blend.
As middle distillates have a far higher value in the marketplace than fuel oils, it can be seen that the use of a visbreaker will considerably improve the economics of fuel oil manufacture. For example, if the cutter stock is taken to have a value of $300 per tonne and fuel oil $150 per tonne (oil prices naturally change quickly, but these prices, and more importantly the differences between them, are not unrealistic), it is a simple matter to calculate the value of the different residues in this example as being:
Virgin residue: $93.1 per tonne
Visbroken residue: $127.0 per tonne
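The cutter-stock and valuation arithmetic above can also be sketched in Python. Since the feed and product viscosity tables are not reproduced here, the residue viscosity below is an assumed placeholder of roughly the right order; only the target viscosity (40 cSt), the cutter-stock viscosity (1.3 cSt) and the prices ($300 and $150 per tonne) come from the text, and the quoted percentages (27.5% and 13.3%) and values ($93.1 and $127.0 per tonne) are the figures to compare against.

```python
import math

def vbn(v_cst):
    # Refutas Viscosity Blending Number, equation (1) above
    return 14.534 * math.log(math.log(v_cst + 0.8)) + 10.975

def cutter_fraction(v_target, v_residue, v_cutter):
    """Weight fraction of cutter stock needed to bring a residue down to the
    target viscosity (equation (4), obtained by rearranging equation (2))."""
    return (vbn(v_target) - vbn(v_residue)) / (vbn(v_cutter) - vbn(v_residue))

def implied_residue_value(frac_cutter, price_cutter, price_fuel_oil):
    """Value of the residue per tonne: the blend sells at the fuel-oil price,
    and the cutter stock in it is worth its own (higher) price."""
    return (price_fuel_oil - frac_cutter * price_cutter) / (1.0 - frac_cutter)

V_TARGET = 40.0     # cSt at 100 C, required fuel-oil viscosity (from the text)
V_CUTTER = 1.3      # cSt at 100 C, cutter-stock viscosity (from the text)
V_RESIDUE = 1000.0  # cSt, assumed placeholder; the source's residue tables are not shown

f = cutter_fraction(V_TARGET, V_RESIDUE, V_CUTTER)
print(f"cutter stock fraction: {f:.1%}")
print(f"implied residue value: ${implied_residue_value(f, 300.0, 150.0):.1f} per tonne")
```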
|
https://en.wikipedia.org/wiki/Visbreaker
|
Visceral pain is defined as pain that results from the activation of nociceptors of the thoracic, pelvic, or abdominal viscera (organs) in the human body. Visceral structures are highly sensitive to distension (stretch), ischemia and inflammation , but relatively insensitive to other stimuli that normally evoke pain such as cutting or burning.
Visceral pain is diffuse, difficult to localize, and often referred to a distant, usually superficial, structure. [ 1 ] It may be accompanied by symptoms such as nausea, vomiting , and changes in vital signs, as well as emotional manifestations. The pain may be described as sickening, throbbing, pulsating, deep, squeezing, and/or dull. [ 2 ] Distinct structural lesions or biochemical abnormalities explain this type of pain in only a proportion of patients. These diseases are grouped under gastrointestinal neuromuscular diseases (GINMD). Some people may experience occasional visceral pain episodes, often very intense in nature, without any evidence of structural, biochemical or histopathologic reason for such symptoms. These diseases are grouped under functional gastrointestinal disorders (FGID), and their pathophysiology and treatment can vary greatly from gastrointestinal neuromuscular diseases. The two major single entities among functional disorders of the gut are functional dyspepsia and irritable bowel syndrome . [ 3 ]
Visceral hypersensitivity is hypersensitive visceral pain perception, which is commonly experienced by individuals with functional gastrointestinal disorders . [ 4 ]
In the past, viscera were considered insensitive to pain but now it is clear that pain from internal organs is widespread and that its social burden may surpass that of pain from superficial ( somatic ) sources. Myocardial ischemia, the most frequent cause of cardiac pain, is the most common cause of death in the United States. [ 5 ] Urinary colic produced from ureteral stones has been categorized as one of the most intense forms of pain that a human being can experience. The prevalence of such stones has continuously increased, reaching values of over 20% in developed countries. [ 6 ] [ 7 ] Surveys have shown prevalence rates among adults of 25% for intermittent abdominal pain and 20% for chest pain; 24% of women experience pelvic pain at any point in time. For over two-thirds of those affected, pain is accepted as part of daily life and symptoms are self-managed; a small proportion defer to specialists for help. Visceral pain conditions are associated with diminished quality of life, and exert a huge cost burden through medical expenses and lost productivity in the workplace. [ 8 ]
Visceral pain should be suspected when vague midline sensations of malaise are reported by a patient. True visceral pain is characterized as a vague, diffuse, and poorly defined sensation. [ 9 ] [ 10 ] Regardless of specific organ of origin, the pain is usually perceived in the midline spanning anywhere from the lower abdomen up to the chest. In the early phases the pain is perceived in the same general area and it has a temporal evolution, making the onset sensation insidious and difficult to identify. [ 11 ]
The pain is typically associated with involvement of the autonomic nervous system . Some of these symptoms include pallor, sweating, nausea, vomiting, and changes in vital signs including blood pressure , heart rate and/or temperature. Strong emotional reactions are also common presenting signs and may include anxiety, anguish and a sense of impending doom . Visceral pathology may also manifest only through emotional reactions and discomfort where no pain is reported. The intensity of visceral pain felt might have no relationship to the extent of internal injury. [ 11 ] [ 12 ]
Visceral pain changes in nature as it progresses. Pain from a specific organ can be experienced, or "referred", at different sites of the body. There is no pathology or cause for pain at these referred somatic sites; however, the pain will be experienced at this location, often with significant intensity. Referred pain is sharper, better localized, and less likely to be accompanied by autonomic or emotional signs. [ 10 ] [ 12 ]
A good example of visceral pain that is commonplace and embodies the wide spectrum of clinical presentations discussed above is a myocardial infarction (MI), more commonly known as a heart attack. This pain is secondary to ischemia of the cardiac tissue. The most common presenting symptom is chest pain that is often described as tightness, pressure or squeezing. The onset of symptoms is usually gradual, over several minutes, and the pain tends to be located in the central chest (overlying the sternum ), although it can be experienced in the left chest, right chest, and even abdominal area. Associated symptoms, which are mostly autonomic in nature, include diaphoresis , nausea , vomiting , palpitations , and anxiety (which is often described as a sense of impending doom). [ 13 ] [ 14 ] Referred pain is experienced most commonly radiating down the left arm, although it can also radiate to the lower jaw , neck , back and epigastrium . Some patients, especially the elderly and diabetics , may present with what is known as a painless myocardial infarction or a "silent heart attack". A painless MI can present with all of the associated symptoms of a heart attack, including nausea, vomiting, anxiety, heaviness, or choking, but the classic chest pain described above is lacking. [ 9 ] [ 15 ]
It is important for both the physician and the patient to remember that the magnitude of injury to internal organs may be dissociated from the intensity of the pain, and that this can be dangerous if overlooked, as for example in a silent heart attack. [ 16 ] More rarely, intense visceral pain does not signify a significant pathologic process, for example intense gas pains.
A vague and poorly defined sensation, as well as the temporal nature of visceral pain, is due to the low density of sensory innervation of viscera and the extensive divergence of visceral input within the central nervous system (CNS). [ 9 ] [ 10 ] The phenomenon of referred pain is secondary to the convergence of visceral afferent (sensory) nerve fibers entering the spinal cord at the same level as the superficial, somatic structures experiencing the pain. This leads to a misinterpretation of incoming signals by higher brain centers. [ 10 ] [ 12 ]
There are two goals when treating visceral pain: relieving the pain itself and treating the underlying cause of the symptoms.
Treatment of the pain should in many circumstances be deferred until the origin of the symptoms has been identified, since masking pain may confound the diagnostic process and delay the recognition of life-threatening conditions. Once a treatable condition has been identified, there is no reason to withhold symptomatic treatment. Moreover, if the cause of the pain is not found within a reasonable time, symptomatic treatment can benefit the patient by preventing long-term sensitization and providing immediate relief. [ 11 ] [ 17 ] [ 18 ]
Symptomatic treatment of visceral pain relies primarily upon pharmacotherapy . Since visceral pain can arise from a wide variety of causes, with or without associated pathology, a wide variety of pharmacological classes of drugs are used, including analgesics (e.g. opiates , NSAIDs , cannabinoids ), antispasmodics (e.g. loperamide , benzodiazepines ), antidepressants (e.g. TCA , SSRI , SNRI ) as well as others (e.g. ketamine , clonidine , gabapentin ). In addition, pharmacotherapy that targets the underlying cause of the pain can help alleviate symptoms by lessening visceral nociceptive input. [ 7 ] For example, the use of nitrates can reduce anginal pain by dilating the coronary arteries and thus reducing the ischemia causing the pain. The use of spasmolytics (antispasmodics) can help alleviate pain from a gastrointestinal obstruction by inhibiting the contraction of the gut. [ 9 ] Issues associated with pharmacotherapy include side effects (e.g. constipation associated with opiate use), chemical dependence or addiction , and inadequate pain relief.
Invasive therapies are in general reserved for patients in whom pharmacological and other non-invasive therapies are ineffective. A wide variety of interventions are available and have been shown to be effective; a few are discussed here. Approximately 50–80% of pelvic cancer pain patients benefit from nerve blocks . [ 19 ] [ 20 ] Nerve blocks offer temporary relief and typically involve injection of a nerve bundle with either a local anesthetic , a steroid , or both. Permanent nerve block can be produced by destruction of nerve tissue. Strong evidence from multiple randomized controlled trials supports the use of neurolytic celiac plexus block to alleviate pain and reduce opioid consumption in patients with malignant pain originating from abdominal viscera such as the pancreas . [ 21 ] Neurostimulation , from a device such as a spinal cord stimulator (SCS), for refractory angina has been shown to be effective in several randomized controlled trials. [ 22 ] [ 23 ] An SCS may also be used for other chronic pain conditions such as chronic pancreatitis and familial Mediterranean fever . Other devices that have shown benefit in reducing pain include transcutaneous electrical nerve stimulators (TENS), targeted field stimulation, both used for somatic hyperalgesic states, external neuromodulation , pulsed radiofrequency ablation and neuraxial drug delivery systems. [ 16 ] [ 24 ]
|
https://en.wikipedia.org/wiki/Visceral_pain
|
Viscimation is the turbulence that occurs when liquids of different viscosities mix, particularly the formation of vortices (also known as "viscimetric whorls") and visible separate threads of the different liquids.
The term viscimation is archaic and idiosyncratic to whisky tasting; the study (or appreciation) of these effects is called viscimetry , and the capacity of a whisky to sustain viscimation (determined predominantly by its alcohol percentage) is its viscimetric potential or viscimetric index.
Causing viscimetric whorls by adding water to liquor is colloquially called "awakening the serpent".
|
https://en.wikipedia.org/wiki/Viscimation
|
In materials science and continuum mechanics , viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation . Viscous materials, like water, resist shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed.
Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material. [ 1 ]
In the nineteenth century, physicists such as James Clerk Maxwell , Ludwig Boltzmann , and Lord Kelvin researched and experimented with creep and recovery of glasses , metals , and rubbers . Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. [ 2 ] Viscoelasticity calculations depend heavily on the viscosity variable, η . The inverse of η is also known as fluidity , φ . The value of either can be derived as a function of temperature or as a given value (i.e. for a dashpot ). [ 1 ]
Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response, it is categorized as a Newtonian material : the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid . There is also the case where the viscosity decreases over time even though the shear/strain rate is held constant; a material which exhibits this behavior is known as thixotropic . In addition, when the stress is independent of the strain rate, the material exhibits plastic deformation. [ 1 ] Many viscoelastic materials exhibit rubber-like behavior explained by the thermodynamic theory of polymer elasticity.
Some examples of viscoelastic materials are amorphous polymers, semicrystalline polymers, biopolymers, metals at very high temperatures, and bitumen materials. Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends on both the rate of the change of their length and the force applied. [ citation needed ]
A viscoelastic material has the following properties: hysteresis is seen in the stress–strain curve; stress relaxation occurs, meaning that a step constant strain causes decreasing stress; and creep occurs, meaning that a step constant stress causes increasing strain.
Unlike purely elastic substances, a viscoelastic substance has an elastic component and a viscous component. The viscosity of a viscoelastic substance gives the substance a strain rate dependence on time. Purely elastic materials do not dissipate energy (heat) when a load is applied, then removed. However, a viscoelastic substance dissipates energy when a load is applied, then removed. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle. [ 1 ]
Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer , parts of the long polymer chain change positions. This movement or rearrangement is called creep . Polymers remain a solid material even when these parts of their chains are rearranging in order to accommodate the stress, and as this occurs, it creates a back stress in the material. When the back stress is the same magnitude as the applied stress, the material no longer creeps. When the original stress is taken away, the accumulated back stresses will cause the polymer to return to its original form. The material creeps, which gives the prefix visco-, and the material fully recovers, which gives the suffix -elasticity. [ 2 ]
Linear viscoelasticity is when the function is separable in both creep response and load. All linear viscoelastic models can be represented by a Volterra equation connecting stress and strain : ε ( t ) = σ ( t ) E inst,creep + ∫ 0 t K ( t − t ′ ) σ ˙ ( t ′ ) d t ′ {\displaystyle \varepsilon (t)={\frac {\sigma (t)}{E_{\text{inst,creep}}}}+\int _{0}^{t}K(t-t'){\dot {\sigma }}(t')dt'} or σ ( t ) = E inst,relax ε ( t ) + ∫ 0 t F ( t − t ′ ) ε ˙ ( t ′ ) d t ′ {\displaystyle \sigma (t)=E_{\text{inst,relax}}\varepsilon (t)+\int _{0}^{t}F(t-t'){\dot {\varepsilon }}(t')dt'} where t is time, σ(t) is stress, ε(t) is strain, E inst,creep and E inst,relax are the instantaneous elastic moduli for creep and relaxation, K(t) is the creep function and F(t) is the relaxation function.
Linear viscoelasticity is usually applicable only for small deformations .
Nonlinear viscoelasticity is when the function is not separable. It usually happens when the deformations are large or if the material changes its properties under deformations. Nonlinear viscoelasticity also elucidates observed phenomena such as normal stresses, shear thinning, and extensional thickening in viscoelastic fluids. [ 3 ]
An anelastic material is a special case of a viscoelastic material: an anelastic material will fully recover to its original state on the removal of load.
When distinguishing between elastic, viscous, and forms of viscoelastic behavior, it is helpful to reference the time scale of the measurement relative to the relaxation times of the material being observed, known as the Deborah number (De): [ 3 ] D e = λ / t {\displaystyle De=\lambda /t} where λ is the relaxation time of the material and t is the characteristic time scale of the observation.
Viscoelasticity is studied using dynamic mechanical analysis , applying a small oscillatory stress and measuring the resulting strain.
A complex dynamic modulus G can be used to represent the relations between the oscillating stress and strain: G = G ′ + i G ″ {\displaystyle G=G'+iG''} where i 2 = − 1 {\displaystyle i^{2}=-1} ; G ′ {\displaystyle G'} is the storage modulus and G ″ {\displaystyle G''} is the loss modulus : G ′ = σ 0 ε 0 cos δ {\displaystyle G'={\frac {\sigma _{0}}{\varepsilon _{0}}}\cos \delta } G ″ = σ 0 ε 0 sin δ {\displaystyle G''={\frac {\sigma _{0}}{\varepsilon _{0}}}\sin \delta } where σ 0 {\displaystyle \sigma _{0}} and ε 0 {\displaystyle \varepsilon _{0}} are the amplitudes of stress and strain respectively, and δ {\displaystyle \delta } is the phase shift between them.
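As a brief illustration of the definitions above, the following Python sketch converts an assumed stress amplitude, strain amplitude and phase lag from an oscillatory test into G′, G″ and the loss tangent; all numerical values are invented for the example.

```python
import numpy as np

# Illustrative sketch: storage modulus G', loss modulus G'' and loss tangent
# from an oscillatory stress/strain measurement, per the definitions above.
stress_amplitude = 1.0e4   # Pa, sigma_0 (assumed)
strain_amplitude = 0.02    # -,  epsilon_0 (assumed)
delta = np.deg2rad(30.0)   # phase shift between stress and strain (assumed)

G_storage = stress_amplitude / strain_amplitude * np.cos(delta)
G_loss = stress_amplitude / strain_amplitude * np.sin(delta)
G_complex = G_storage + 1j * G_loss

print(f"G'  = {G_storage:.3e} Pa")
print(f"G'' = {G_loss:.3e} Pa")
print(f"tan(delta) = {G_loss / G_storage:.3f}")   # damping factor
```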
Viscoelastic materials, such as amorphous polymers, semicrystalline polymers, biopolymers and even the living tissue and cells, [ 4 ] can be modeled in order to determine their stress and strain or force and displacement interactions as well as their temporal dependencies. These models, which include the Maxwell model , the Kelvin–Voigt model , the standard linear solid model , and the Burgers model , are used to predict a material's response under different loading conditions.
Viscoelastic behavior has elastic and viscous components modeled as linear combinations of springs and dashpots , respectively. Each model differs in the arrangement of these elements, and all of these viscoelastic models can be equivalently modeled as electrical circuits.
In an equivalent electrical circuit, stress is represented by current, and strain rate by voltage. The elastic modulus of a spring is analogous to the inverse of a circuit's inductance (it stores energy) and the viscosity of a dashpot to a circuit's resistance (it dissipates energy).
The elastic components, as previously mentioned, can be modeled as springs of elastic constant E, given the formula: σ = E ε {\displaystyle \sigma =E\varepsilon } where σ is the stress, E is the elastic modulus of the material, and ε is the strain that occurs under the given stress, similar to Hooke's law .
The viscous components can be modeled as dashpots such that the stress–strain rate relationship can be given as, σ = η d ε d t {\displaystyle \sigma =\eta {\frac {d\varepsilon }{dt}}} where σ is the stress, η is the viscosity of the material, and dε/dt is the time derivative of strain.
The relationship between stress and strain can be simplified for specific stress or strain rates. For high stress or strain rates/short time periods, the time-derivative components of the stress–strain relationship dominate. Under these conditions the dashpot behaves as a rigid rod capable of sustaining high loads without deforming; hence, the dashpot can be considered a "short circuit". [ 5 ] [ 6 ]
Conversely, for low stress states/longer time periods, the time derivative components are negligible and the dashpot can be effectively removed from the system – an "open" circuit. [ 6 ] As a result, only the spring connected in parallel to the dashpot will contribute to the total strain in the system. [ 5 ]
The Maxwell model can be represented by a purely viscous damper and a purely elastic spring connected in series, as shown in the diagram. The model can be represented by the following equation: σ + η E σ ˙ = η ε ˙ {\displaystyle \sigma +{\frac {\eta }{E}}{\dot {\sigma }}=\eta {\dot {\varepsilon }}}
Under this model, if the material is put under a constant strain, the stresses gradually relax . When a material is put under a constant stress, the strain has two components. First, an elastic component occurs instantaneously, corresponding to the spring, and relaxes immediately upon release of the stress. The second is a viscous component that grows with time as long as the stress is applied. The Maxwell model predicts that stress decays exponentially with time, which is accurate for most polymers. One limitation of this model is that it does not predict creep accurately. The Maxwell model for creep or constant-stress conditions postulates that strain will increase linearly with time. However, polymers for the most part show the strain rate to be decreasing with time. [ 2 ]
This model can be applied to soft solids: thermoplastic polymers in the vicinity of their melting temperature, fresh concrete (neglecting its aging), and numerous metals at a temperature close to their melting point.
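A minimal Python sketch of the Maxwell model's two signature responses, stress relaxation under a step strain and creep under a step stress, using the closed-form solutions of the constitutive equation above; the modulus, viscosity and loads are illustrative values, not material data from the source.

```python
import numpy as np

# Maxwell model: sigma + (eta/E) * dsigma/dt = eta * deps/dt.
# Step strain  -> exponential stress relaxation with time constant tau = eta/E.
# Step stress  -> instantaneous elastic strain plus linearly growing viscous strain.
E = 1.0e6        # elastic modulus, Pa (illustrative)
eta = 1.0e8      # viscosity, Pa*s (illustrative)
tau = eta / E    # relaxation time, s

t = np.linspace(0.0, 5 * tau, 200)

eps0 = 0.01                                   # applied step strain
sigma_relax = E * eps0 * np.exp(-t / tau)     # stress relaxation

sigma0 = 1.0e4                                # applied step stress, Pa
eps_creep = sigma0 / E + sigma0 * t / eta     # creep strain

print(f"relaxation time tau = {tau:.0f} s")
print(f"stress remaining after about one tau: {sigma_relax[np.searchsorted(t, tau)]:.0f} Pa")
```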
The equation introduced here, however, lacks a consistent derivation from a more microscopic model and is not observer-independent. The upper-convected Maxwell model is its sound formulation in terms of the Cauchy stress tensor and constitutes the simplest tensorial constitutive model for viscoelasticity (see e.g. [ 7 ] or [ 8 ] ).
The Kelvin–Voigt model, also known as the Voigt model, consists of a Newtonian damper and Hookean elastic spring connected in parallel, as shown in the picture. It is used to explain the creep behaviour of polymers.
The constitutive relation is expressed as a linear first-order differential equation: σ = E ε + η ε ˙ {\displaystyle \sigma =E\varepsilon +\eta {\dot {\varepsilon }}}
This model represents a solid undergoing reversible, viscoelastic strain. Upon application of a constant stress, the material deforms at a decreasing rate, asymptotically approaching the steady-state strain. When the stress is released, the material gradually relaxes to its undeformed state. At constant stress (creep), the model is quite realistic as it predicts strain to tend to σ/E as time continues to infinity. Similar to the Maxwell model, the Kelvin–Voigt model also has limitations. The model is extremely good with modelling creep in materials, but with regards to relaxation the model is much less accurate. [ 9 ]
This model can be applied to organic polymers, rubber, and wood when the load is not too high.
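A corresponding sketch for the Kelvin–Voigt relation above, showing retarded creep toward σ₀/E and gradual recovery after unloading; the parameter values are again illustrative only.

```python
import numpy as np

# Kelvin-Voigt model: sigma = E*eps + eta*deps/dt.
# Constant stress sigma0 -> strain approaches sigma0/E with retardation time eta/E;
# after unloading, the strain relaxes back toward zero.
E = 1.0e6       # Pa (illustrative)
eta = 1.0e8     # Pa*s (illustrative)
sigma0 = 1.0e4  # Pa (illustrative)
tau = eta / E   # retardation time, s

t_load = np.linspace(0.0, 5 * tau, 100)
eps_creep = (sigma0 / E) * (1.0 - np.exp(-t_load / tau))

# Recovery after the stress is removed at the end of loading
eps_end = eps_creep[-1]
t_rec = np.linspace(0.0, 5 * tau, 100)
eps_recovery = eps_end * np.exp(-t_rec / tau)

print(f"asymptotic creep strain sigma0/E = {sigma0 / E:.4f}")
print(f"strain at end of loading: {eps_end:.4f}")
```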
The standard linear solid model, also known as the Zener model, consists of two springs and a dashpot. It is the simplest model that describes both the creep and stress relaxation behaviors of a viscoelastic material properly. For this model, the governing constitutive relations are:
Under a constant stress, the modeled material will instantaneously deform to some strain, which is the instantaneous elastic portion of the strain. After that it will continue to deform and asymptotically approach a steady-state strain, which is the retarded elastic portion of the strain. Although the standard linear solid model is more accurate than the Maxwell and Kelvin–Voigt models in predicting material responses, mathematically it returns inaccurate results for strain under specific loading conditions.
The Jeffreys model like the Zener model is a three element model. It consist of two dashpots and a spring. [ 10 ]
It was proposed in 1929 by Harold Jeffreys to study Earth's mantle . [ 11 ]
The Burgers model consists of either two Maxwell components in parallel or a Kelvin–Voigt component, a spring and a dashpot in series. For this model, the governing constitutive relations are:
This model incorporates viscous flow into the standard linear solid model, giving a linearly increasing asymptote for strain under fixed loading conditions.
The generalized Maxwell model, also known as the Wiechert model, is the most general form of the linear model for viscoelasticity. It takes into account that relaxation does not occur at a single time, but over a distribution of times. Because molecular segments of different lengths relax at different rates, with shorter segments contributing less than longer ones, there is a distribution of relaxation times. The Wiechert model captures this by having as many spring–dashpot Maxwell elements as are necessary to accurately represent the distribution. [ 12 ] Applications: metals and alloys at temperatures lower than one quarter of their absolute melting temperature (expressed in K).
Non-linear viscoelastic constitutive equations are needed to quantitatively account for phenomena in fluids like differences in normal stresses, shear thinning, and extensional thickening. [ 3 ] Necessarily, the history experienced by the material is needed to account for time-dependent behavior, and is typically included in models as a history kernel K . [ 13 ]
The second-order fluid is typically considered the simplest nonlinear viscoelastic model. It typically applies in a narrow region of material behavior, at high strain amplitudes and at Deborah numbers intermediate between Newtonian fluids and other, more complicated nonlinear viscoelastic fluids. [ 3 ] The second-order fluid constitutive equation is given by:
T = − p I + 2 η 0 D − ψ 1 D ▽ + 4 ψ 2 D ⋅ D {\displaystyle \mathbf {T} =-p\mathbf {I} +2\eta _{0}\mathbf {D} -\psi _{1}\mathbf {D} ^{\triangledown }+4\psi _{2}\mathbf {D} \cdot \mathbf {D} }
where p is the pressure, I is the identity tensor, D is the rate-of-deformation tensor, η 0 is the zero-shear viscosity, ψ 1 and ψ 2 are material coefficients related to the normal stress differences, and D ▽ denotes a convected (frame-invariant) time derivative of D .
The upper-convected Maxwell model incorporates nonlinear time behavior into the viscoelastic Maxwell model, given by: [ 3 ]
τ + λ τ ▽ = 2 η 0 D {\displaystyle \mathbf {\tau } +\lambda \mathbf {\tau } ^{\triangledown }=2\eta _{0}\mathbf {D} } where τ {\displaystyle \mathbf {\tau } } denotes the stress tensor.
The Oldroyd-B model is an extension of the Upper Convected Maxwell model and is interpreted as a solvent filled with elastic bead and spring dumbbells.
The model is named after its creator James G. Oldroyd . [ 14 ] [ 15 ] [ 16 ]
The model can be written as: T + λ 1 T ∇ = 2 η 0 ( D + λ 2 D ∇ ) {\displaystyle \mathbf {T} +\lambda _{1}{\stackrel {\nabla }{\mathbf {T} }}=2\eta _{0}(\mathbf {D} +\lambda _{2}{\stackrel {\nabla }{\mathbf {D} }})} where T is the stress tensor, λ 1 is the relaxation time, λ 2 is the retardation time, the superscript ∇ denotes the upper-convected time derivative, η 0 is the total zero-shear viscosity of solvent and polymer, and D is the rate-of-deformation tensor.
Whilst the model gives good approximations of viscoelastic fluids in shear flow, it has an unphysical singularity in extensional flow, where the dumbbells are infinitely stretched. This is, however, specific to idealised flow; in the case of a cross-slot geometry the extensional flow is not ideal, so the stress, although singular, remains integrable, although the stress is infinite in a correspondingly infinitely small region. [ 16 ]
If the solvent viscosity is zero, the Oldroyd-B becomes the upper convected Maxwell model .
The Wagner model might be considered a simplified practical form of the Bernstein–Kearsley–Zapas model. The model was developed by the German rheologist Manfred Wagner .
For the isothermal conditions the model can be written as: σ ( t ) = − p I + ∫ − ∞ t M ( t − t ′ ) h ( I 1 , I 2 ) B ( t ′ ) d t ′ {\displaystyle \mathbf {\sigma } (t)=-p\mathbf {I} +\int _{-\infty }^{t}M(t-t')h(I_{1},I_{2})\mathbf {B} (t')\,dt'}
where σ ( t ) is the Cauchy stress tensor as a function of time t , p is the pressure, I is the unity tensor, M is the memory function, h ( I 1 , I 2 ) is the strain damping function, and B is the Finger tensor whose first and second invariants are I 1 and I 2 .
The strain damping function is usually written as: h ( I 1 , I 2 ) = m ∗ exp ( − n 1 I 1 − 3 ) + ( 1 − m ∗ ) exp ( − n 2 I 2 − 3 ) {\displaystyle h(I_{1},I_{2})=m^{*}\exp(-n_{1}{\sqrt {I_{1}-3}})+(1-m^{*})\exp(-n_{2}{\sqrt {I_{2}-3}})} If the value of the strain damping function is equal to one, then the deformation is small; if it approaches zero, then the deformations are large. [ 17 ] [ 18 ]
In a one-dimensional relaxation test, the material is subjected to a sudden strain that is kept constant over the duration of the test, and the stress is measured over time. The initial stress is due to the elastic response of the material. Then, the stress relaxes over time due to the viscous effects in the material. Typically, either a tensile, compressive, bulk compression, or shear strain is applied. The resulting stress vs. time data can be fitted with a number of equations, called models. Only the notation changes depending on the type of strain applied: tensile-compressive relaxation is denoted E {\displaystyle E} , shear is denoted G {\displaystyle G} , bulk is denoted K {\displaystyle K} . The Prony series for the shear relaxation is
G ( t ) = G ∞ + ∑ i = 1 N G i exp ( − t / τ i ) {\displaystyle G(t)=G_{\infty }+\sum _{i=1}^{N}G_{i}\exp(-t/\tau _{i})}
where G ∞ {\displaystyle G_{\infty }} is the long-term modulus once the material is totally relaxed, and τ i {\displaystyle \tau _{i}} are the relaxation times (not to be confused with τ i {\displaystyle \tau _{i}} in the diagram); the higher their values, the longer it takes for the stress to relax. The data are fitted with the equation using a minimization algorithm that adjusts the parameters ( G ∞ , G i , τ i {\displaystyle G_{\infty },G_{i},\tau _{i}} ) to minimize the error between the predicted and measured values. [ 19 ]
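A hedged sketch of this fitting step: a two-term Prony series is fitted to synthetic, slightly noisy relaxation data with SciPy's least-squares curve_fit. Real data, the number of terms N and the starting guesses would of course differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit G(t) = G_inf + G1*exp(-t/tau1) + G2*exp(-t/tau2) to relaxation data.
# The "data" below are synthetic and slightly noisy, purely for illustration.
def prony2(t, G_inf, G1, tau1, G2, tau2):
    return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.logspace(-2, 3, 60)                     # s
true = (2.0e5, 5.0e5, 0.5, 3.0e5, 50.0)        # G_inf, G1, tau1, G2, tau2 (Pa, s)
G_data = prony2(t, *true) * (1 + 0.01 * rng.standard_normal(t.size))

p0 = (1.0e5, 4.0e5, 1.0, 4.0e5, 100.0)         # initial guess
params, _ = curve_fit(prony2, t, G_data, p0=p0, maxfev=10000)

G_inf, G1, tau1, G2, tau2 = params
print(f"G_inf = {G_inf:.3e} Pa, G0 = {G_inf + G1 + G2:.3e} Pa")
print(f"tau1 = {tau1:.3g} s, tau2 = {tau2:.3g} s")
```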
An alternative form is obtained noting that the elastic modulus is related to the long term modulus by
G ( t = 0 ) = G 0 = G ∞ + ∑ i = 1 N G i {\displaystyle G(t=0)=G_{0}=G_{\infty }+\sum _{i=1}^{N}G_{i}}
Therefore,
G ( t ) = G 0 − ∑ i = 1 N G i [ 1 − e − t / τ i ] {\displaystyle G(t)=G_{0}-\sum _{i=1}^{N}G_{i}\left[1-e^{-t/\tau _{i}}\right]}
This form is convenient when the elastic shear modulus G 0 {\displaystyle G_{0}} is obtained from data independent from the relaxation data, and/or for computer implementation, when it is desired to specify the elastic properties separately from the viscous properties, as in Simulia (2010). [ 20 ]
A creep experiment is usually easier to perform than a relaxation one, so most data are available as (creep) compliance vs. time. [ 21 ] Unfortunately, there is no known closed form for the (creep) compliance in terms of the coefficients of the Prony series. So, if one has creep data, it is not easy to obtain the coefficients of the (relaxation) Prony series, which are needed, for example, in Simulia (2010). [ 20 ] An expedient way to obtain these coefficients is the following. First, fit the creep data with a model that has closed-form solutions in both compliance and relaxation, for example the Maxwell–Kelvin model (eq. 7.18–7.19) in Barbero (2007) [ 22 ] or the Standard Solid model (eq. 7.20–7.21) in Barbero (2007) [ 22 ] (section 7.1.3). Once the parameters of the creep model are known, produce relaxation pseudo-data with the conjugate relaxation model for the same times as the original data. Finally, fit the pseudo-data with the Prony series.
The secondary bonds of a polymer constantly break and reform due to thermal motion. Application of a stress favors some conformations over others, so the molecules of the polymer will gradually "flow" into the favored conformations over time. [ 23 ] Because thermal motion is one factor contributing to the deformation of polymers, viscoelastic properties change with increasing or decreasing temperature. In most cases, the creep modulus, defined as the ratio of applied stress to the time-dependent strain, decreases with increasing temperature. Generally speaking, an increase in temperature correlates to a logarithmic decrease in the time required to impart equal strain under a constant stress. In other words, it takes less work to stretch a viscoelastic material an equal distance at a higher temperature than it does at a lower temperature.
The effect of temperature on the viscoelastic behavior of a polymer can be examined in more detail by plotting its modulus against temperature.
Five main regions can be distinguished in such a plot for typical polymers (some accounts denote four, combining regions IV and V). [ 24 ]
Extreme cold temperatures can cause viscoelastic materials to change to the glass phase and become brittle . For example, exposure of pressure sensitive adhesives to extreme cold ( dry ice , freeze spray , etc.) causes them to lose their tack, resulting in debonding.
When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.
At time t 0 {\displaystyle t_{0}} , a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails, if it is a viscoelastic liquid. If, on the other hand, it is a viscoelastic solid, it may or may not fail depending on the applied stress versus the material's ultimate resistance. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time t 1 {\displaystyle t_{1}} , after which the strain immediately decreases (discontinuity) then gradually decreases at times t > t 1 {\displaystyle t>t_{1}} to a residual strain.
Viscoelastic creep data can be presented by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. [ 27 ] Below its critical stress, the viscoelastic creep modulus is independent of stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value.
Viscoelastic creep is important when considering long-term structural design. Given loading and temperature conditions, designers can choose materials that best suit component lifetimes.
Shear rheometers are based on the idea of putting the material to be measured between two plates, one or both of which move in a shear direction to induce stresses and strains in the material. The testing can be done at constant strain rate, stress, or in an oscillatory fashion (a form of dynamic mechanical analysis ). [ 28 ] Shear rheometers are typically limited by edge effects where the material may leak out from between the two plates and slipping at the material/plate interface.
Extensional rheometers, also known as extensiometers, measure viscoelastic properties by pulling a viscoelastic fluid, typically uniaxially. [ 29 ] Because this typically makes use of capillary forces and confines the fluid to a narrow geometry, the technique is often limited to fluids with relatively low viscosity like dilute polymer solutions or some molten polymers. [ 29 ] Extensional rheometers are also limited by edge effects at the ends of the extensiometer and pressure differences between inside and outside the capillary. [ 3 ]
Despite the apparent limitations mentioned above, extensional rheometry can also be performed on high viscosity fluids. Although this requires the use of different instruments, these techniques and apparatuses allow for the study of the extensional viscoelastic properties of materials such as polymer melts. Three of the most common extensional rheometry instruments developed within the last 50 years are the Meissner-type rheometer, the filament stretching rheometer (FiSER), and the Sentmanat Extensional Rheometer (SER).
The Meissner-type rheometer, developed by Meissner and Hostettler in 1996, uses two sets of counter-rotating rollers to strain a sample uniaxially. [ 30 ] This method uses a constant sample length throughout the experiment, and supports the sample in between the rollers via an air cushion to eliminate sample sagging effects. It does suffer from a few issues – for one, the fluid may slip at the belts, which leads to lower strain rates than one would expect. Additionally, this equipment is challenging to operate and costly to purchase and maintain.
The FiSER rheometer simply contains fluid in between two plates. During an experiment, the top plate is held steady and a force is applied to the bottom plate, moving it away from the top one. [ 31 ] The strain rate is measured by the rate of change of the sample radius at its middle. It is calculated using the following equation: ϵ ˙ = − 2 R d R d t {\displaystyle {\dot {\epsilon }}=-{\frac {2}{R}}{dR \over dt}} where R {\displaystyle R} is the mid-radius value and ϵ ˙ {\displaystyle {\dot {\epsilon }}} is the strain rate. The viscosity of the sample is then calculated using the following equation: η = F π R 2 ϵ ˙ {\displaystyle \eta ={\frac {F}{\pi R^{2}{\dot {\epsilon }}}}} where η {\displaystyle \eta } is the sample viscosity, and F {\displaystyle F} is the force applied to the sample to pull it apart.
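A small Python sketch applying the two FiSER relations above to a recorded mid-filament radius and force trace; the traces here are synthetic placeholders (an exponentially thinning filament under a constant force), so the numbers are illustrative only.

```python
import numpy as np

# FiSER sketch: strain rate from the mid-filament radius R(t),
#   eps_dot = -(2/R) dR/dt,
# and extensional viscosity eta = F / (pi * R**2 * eps_dot).
t = np.linspace(0.0, 1.0, 50)                # s
R = 1.0e-3 * np.exp(-2.0 * t)                # m, assumed exponentially thinning filament
F = np.full_like(t, 1.0e-3)                  # N, assumed constant pulling force

dR_dt = np.gradient(R, t)
eps_dot = -(2.0 / R) * dR_dt                 # extensional strain rate, 1/s
eta_ext = F / (np.pi * R**2 * eps_dot)       # extensional viscosity, Pa*s

print(f"strain rate (should be ~4 1/s for this synthetic trace): {eps_dot[10]:.2f}")
print(f"extensional viscosity at t = {t[10]:.2f} s: {eta_ext[10]:.3e} Pa*s")
```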
Much like the Meissner-type rheometer, the SER rheometer uses a set of two rollers to strain a sample at a given rate. [ 32 ] It then calculates the sample viscosity using the well known equation: σ = η ϵ ˙ {\displaystyle \sigma =\eta {\dot {\epsilon }}} where σ {\displaystyle \sigma } is the stress, η {\displaystyle \eta } is the viscosity and ϵ ˙ {\displaystyle {\dot {\epsilon }}} is the strain rate. The stress in this case is determined via torque transducers present in the instrument. The small size of this instrument makes it easy to use and eliminates sample sagging between the rollers. A schematic detailing the operation of the SER extensional rheometer can be found on the right.
Though there are many instruments that test the mechanical and viscoelastic response of materials, broadband viscoelastic spectroscopy (BVS) and resonant ultrasound spectroscopy (RUS) are more commonly used to test viscoelastic behavior because they can be used above and below ambient temperatures and are more specific to testing viscoelasticity. These two instruments employ a damping mechanism at various frequencies and time ranges with no appeal to time–temperature superposition . Using BVS and RUS to study the mechanical properties of materials is important to understanding how a material exhibiting viscoelasticity will perform. [ 33 ]
|
https://en.wikipedia.org/wiki/Viscoelasticity
|
A viscometer (also called viscosimeter ) is an instrument used to measure the viscosity of a fluid . For liquids with viscosities which vary with flow conditions , an instrument called a rheometer is used. Thus, a rheometer can be considered as a special type of viscometer. [ 1 ] Viscometers can measure only constant viscosity, that is, viscosity that does not change with flow conditions.
In general, either the fluid remains stationary and an object moves through it, or the object is stationary and the fluid moves past it. The drag caused by relative motion of the fluid and a surface is a measure of the viscosity. The flow conditions must have a sufficiently small value of Reynolds number for there to be laminar flow .
At 20 °C, the dynamic viscosity (kinematic viscosity × density) of water is 1.0038 mPa·s and its kinematic viscosity (product of flow time × factor) is 1.0022 mm²/s. These values are used for calibrating certain types of viscometers.
These devices are also known as glass capillary viscometers or Ostwald viscometers , named after Wilhelm Ostwald . Another version is the Ubbelohde viscometer , which consists of a U-shaped glass tube held vertically in a controlled-temperature bath. In one arm of the U is a vertical section of precise narrow bore (the capillary). Above this is a bulb; another bulb sits lower down on the other arm. In use, liquid is drawn into the upper bulb by suction, then allowed to flow down through the capillary into the lower bulb. Two marks (one above and one below the upper bulb) indicate a known volume. The time taken for the level of the liquid to pass between these marks is proportional to the kinematic viscosity. The calibration can be done using a fluid of known properties, and most commercial units are provided with a conversion factor.
The time required for the test liquid to flow through a capillary of a known diameter of a certain factor between two marked points is measured. By multiplying the time taken by the factor of the viscometer, the kinematic viscosity is obtained.
Such viscometers can be classified as direct-flow or reverse-flow. Reverse-flow viscometers have the reservoir above the markings, and direct-flow viscometers have the reservoir below the markings. This classification exists so that the level can be determined even when opaque or staining liquids are measured; otherwise the liquid would cover the markings and make it impossible to gauge when the level passes each mark. It also allows the viscometer to have more than one set of marks, so that the time to reach a third mark can be recorded in the same run, yielding two timings and allowing subsequent calculation of determinability to ensure accurate results. The use of two timings in one viscometer in a single run is only possible if the sample being measured has Newtonian properties . Otherwise the change in driving head, which in turn changes the shear rate, will produce a different viscosity for the two bulbs.
Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity , which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters is normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerol as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes, including many different oils and polymer liquids such as solutions.
In 1851, George Gabriel Stokes derived an expression for the frictional force (also called drag force ) exerted on spherical objects with very small Reynolds numbers (e.g., very small particles) in a continuous viscous fluid by changing the small fluid-mass limit of the generally unsolvable Navier–Stokes equations :
F d = 6 π μ R v {\displaystyle F_{d}=6\pi \mu Rv} where F d is the frictional force acting on the interface between the fluid and the particle, μ is the dynamic viscosity of the fluid, R is the radius of the sphere, and v is the velocity of the particle relative to the fluid.
If the particles are falling in the viscous fluid by their own weight, then a terminal velocity, also known as the settling velocity, is reached when this frictional force combined with the buoyant force exactly balance the gravitational force . The resulting settling velocity (or terminal velocity ) is given by
v s = 2 9 ( ρ p − ρ f ) μ g R 2 {\displaystyle v_{s}={\frac {2}{9}}{\frac {\left(\rho _{p}-\rho _{f}\right)}{\mu }}gR^{2}} where v s is the settling (terminal) velocity, ρ p is the density of the particle, ρ f is the density of the fluid, g is the gravitational acceleration, μ is the dynamic viscosity of the fluid, and R is the radius of the sphere.
Note that Stokes flow is assumed, so the Reynolds number must be small.
A limiting factor on the validity of this result is the roughness of the sphere being used.
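A hedged worked example of the falling-sphere measurement: the settling-velocity expression above is rearranged for the dynamic viscosity, using an assumed timed fall of a small steel ball through glycerol, and the Reynolds number is checked against the laminar-flow requirement noted earlier. All measured values are illustrative.

```python
import math

# Falling-sphere sketch: rearrange v_s = (2/9) * (rho_p - rho_f) * g * R**2 / mu
# for mu, using a timed fall between two marks.
g = 9.81            # m/s^2
R = 1.0e-3          # sphere radius, m (assumed)
rho_p = 7800.0      # sphere density, kg/m^3 (steel, assumed)
rho_f = 1260.0      # fluid density, kg/m^3 (glycerol, assumed)
distance = 0.10     # m between the two marks (assumed)
fall_time = 10.7    # s, assumed measured time

v_s = distance / fall_time
mu = 2.0 * (rho_p - rho_f) * g * R**2 / (9.0 * v_s)        # dynamic viscosity, Pa*s

reynolds = rho_f * v_s * (2.0 * R) / mu
print(f"terminal velocity: {v_s:.4f} m/s")
print(f"dynamic viscosity: {mu:.3f} Pa*s")
print(f"Reynolds number:   {reynolds:.3f}  (must be << 1 for Stokes flow)")
```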
A modification of the straight falling-sphere viscometer is the rolling-ball viscometer, which times a ball rolling down a slope whilst immersed in the test fluid. This can be further improved by using a patented V-plate, which increases the number of rotations per distance traveled, allowing smaller, more portable devices. The controlled rolling motion of the ball avoids turbulence in the fluid, which would otherwise occur with a falling ball. [ 2 ] This type of device is also suitable for shipboard use.
The falling-piston viscometer is also known as the Norcross viscometer after its inventor, Austin Norcross. The principle of viscosity measurement in this rugged and sensitive industrial device is based on a piston and cylinder assembly. The piston is periodically raised by an air-lifting mechanism, drawing the material being measured down through the clearance (gap) between the piston and the wall of the cylinder into the space formed below the piston as it is raised. The assembly is then typically held up for a few seconds, then allowed to fall by gravity, expelling the sample out through the same path that it entered, creating a shearing effect on the measured liquid, which makes this viscometer particularly sensitive and good for measuring certain thixotropic liquids. The time of fall is a measure of viscosity, with the clearance between the piston and the inside of the cylinder forming the measuring orifice. The viscosity controller measures the time of fall (time-of-fall seconds being the measure of viscosity) and displays the resulting viscosity value. The controller can calibrate the time-of-fall value to cup seconds (known as efflux cup), Saybolt universal seconds (SUS) or centipoise .
Industrial use is popular due to simplicity, repeatability, low maintenance and longevity. This type of measurement is not affected by flow rate or external vibrations. The principle of operation can be adapted for many different conditions, making it ideal for process control environments.
The oscillating-piston viscometer, sometimes referred to as an electromagnetic viscometer or EMV viscometer, was invented at Cambridge Viscosity (formerly Cambridge Applied Systems) in 1986. The sensor comprises a measurement chamber and a magnetically influenced piston. Measurements are taken whereby a sample is first introduced into the thermally controlled measurement chamber where the piston resides. Electronics drive the piston into oscillatory motion within the measurement chamber with a controlled magnetic field. A shear stress is imposed on the liquid (or gas) by the piston's travel, and the viscosity is determined by measuring the travel time of the piston. The construction parameters for the annular spacing between the piston and measurement chamber, the strength of the electromagnetic field, and the travel distance of the piston are used to calculate the viscosity according to Newton's law of viscosity .
The oscillating-piston viscometer technology has been adapted for small-sample viscosity and micro-sample viscosity testing in laboratory applications. It has also been adapted to high-pressure viscosity and high-temperature viscosity measurements in both laboratory and process environments. The viscosity sensors have been scaled for a wide range of industrial applications, such as small-size viscometers for use in compressors and engines, flow-through viscometers for dip-coating processes, in-line viscometers for use in refineries, and hundreds of other applications. Improvements in sensitivity from modern electronics are stimulating growth in the popularity of oscillating-piston viscometers among academic laboratories exploring gas viscosity.
Vibrational viscometers date back to the 1950s Bendix instrument, which is of a class that operates by measuring the damping of an oscillating electromechanical resonator immersed in a fluid whose viscosity is to be determined. The resonator generally oscillates in torsion or transversely (as a cantilever beam or tuning fork). The higher the viscosity, the larger the damping imposed on the resonator. The resonator's damping may be measured by one of several methods: measuring the power input necessary to keep the oscillator vibrating at constant amplitude, measuring the decay time of the oscillation once the excitation is switched off, or measuring the frequency of the resonator as a function of the phase angle between excitation and response waveforms.
The vibrational instrument also suffers from a lack of a defined shear field, which makes it unsuited to measuring the viscosity of a fluid whose flow behaviour is not known beforehand.
Vibrating viscometers are rugged industrial systems used to measure viscosity in the process condition. The active part of the sensor is a vibrating rod. The vibration amplitude varies according to the viscosity of the fluid in which the rod is immersed. These viscosity meters are suitable for measuring clogging fluid and high-viscosity fluids, including those with fibers (up to 1000 Pa·s). Currently, many industries around the world consider these viscometers to be the most efficient system with which to measure the viscosities of a wide range of fluids; by contrast, rotational viscometers require more maintenance, are unable to measure clogging fluid, and require frequent calibration after intensive use. Vibrating viscometers have no moving parts, no weak parts and the sensitive part is typically small. Even very basic or acidic fluids can be measured by adding a protective coating, such as enamel , or by changing the material of the sensor to a material such as 316L stainless steel . Vibrating viscometers are the most widely used inline instrument to monitor the viscosity of the process fluid in tanks, and pipes.
The quartz viscometer is a special type of vibrational viscometer. Here, an oscillating quartz crystal is immersed into a fluid and the specific influence on the oscillating behavior defines the viscosity. The principle of quartz viscosimetry is based on the idea of W. P. Mason. The basic concept is the application of a piezoelectric crystal for the determination of viscosity. The high-frequency electric field that is applied to the oscillator causes a movement of the sensor and results in the shearing of the fluid. The movement of the sensor is then influenced by the external forces (the shear stress) of the fluid, which affects the electrical response of the sensor. [ 3 ] The calibration procedure as a pre-condition of viscosity determination by means of a quartz crystal goes back to B. Bode, who facilitated the detailed analysis of the electrical and mechanical transmission behavior of the oscillating system. [ 4 ] On the basis of this calibration, the quartz viscosimeter was developed which allows continuous viscosity determination in resting and flowing liquids. [ 5 ]
The quartz crystal microbalance functions as a vibrational viscometer by exploiting the piezoelectric properties inherent in quartz to perform measurements of conductance spectra of liquids and thin films exposed to the surface of the crystal. [ 6 ] From these spectra, frequency shifts and a broadening of the peaks for the resonant and overtone frequencies of the quartz crystal are tracked and used to determine changes in mass as well as the viscosity , shear modulus , and other viscoelastic properties of the liquid or thin film. One benefit of using the quartz crystal microbalance to measure viscosity is the small amount of sample required to obtain an accurate measurement. However, because the measured viscoelastic properties depend on the sample-preparation technique and on the thickness of the film or bulk liquid, errors of up to 10% can occur in viscosity measurements between samples. [ 6 ]
One technique for measuring the viscosity of a liquid with a quartz crystal microbalance, which improves the consistency of measurements, is the drop method. [ 7 ] [ 8 ] Instead of creating a thin film or submerging the quartz crystal in a liquid, a single drop of the fluid of interest is deposited on the surface of the crystal. The viscosity is extracted from the shift in the frequency data using the following equation:
Δf = −f₀^{3/2} √( η_l ρ_l / ( π μ_Q ρ_Q ) ) {\displaystyle \Delta f=-f_{0}^{3/2}{\sqrt {\frac {\eta _{l}\rho _{l}}{\pi \mu _{Q}\rho _{Q}}}}}
where f 0 {\displaystyle f_{0}} is the resonant frequency, η l {\displaystyle \eta _{l}} is the viscosity of the fluid, ρ l {\displaystyle \rho _{l}} is the density of the fluid, μ Q {\displaystyle \mu _{Q}} is the shear modulus of the quartz, and ρ Q {\displaystyle \rho _{Q}} is the density of the quartz. [ 8 ] An extension of this technique corrects the shift in the resonant frequency by the size of the drop deposited on the quartz crystal. [ 7 ]
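Rearranging the relation above gives the fluid viscosity explicitly. The sketch below assumes commonly quoted AT-cut quartz properties (shear modulus about 2.947×10¹⁰ Pa, density about 2648 kg/m³) and a purely illustrative 5 MHz crystal; it is a minimal illustration of the algebra, not a complete QCM data-reduction routine.

```python
import math

# Commonly quoted nominal properties of AT-cut quartz (assumed values)
MU_Q = 2.947e10    # shear modulus of quartz, Pa
RHO_Q = 2648.0     # density of quartz, kg/m^3

def viscosity_from_frequency_shift(delta_f: float, f0: float, rho_liquid: float) -> float:
    """Invert delta_f = -f0**1.5 * sqrt(eta*rho_l / (pi*mu_Q*rho_Q))
    to obtain the liquid viscosity eta in Pa*s."""
    return (delta_f ** 2) * math.pi * MU_Q * RHO_Q / (f0 ** 3 * rho_liquid)

# Illustrative numbers: a 5 MHz crystal showing a ~700 Hz downshift in a water-like liquid
print(viscosity_from_frequency_shift(delta_f=-700.0, f0=5.0e6, rho_liquid=998.0))
# -> roughly 1e-3 Pa*s, i.e. close to the viscosity of water
```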
Rotational viscometers use the idea that the torque required to rotate an object in a fluid is a function of the viscosity of that fluid. They measure the torque required to rotate a disk or bob in a fluid at a known speed.
"Cup and bob" viscometers work by defining the exact volume of a sample to be sheared within a test cell; the torque required to achieve a certain rotational speed is measured and plotted. There are two classical geometries in "cup and bob" viscometers, known as either the "Couette" or "Searle" systems, distinguished by whether the cup or bob rotates. The rotating cup is preferred in some cases because it reduces the onset of Taylor vortices at very high shear rates, but the rotating bob is more commonly used, as the instrument design can be more flexible for other geometries as well.
"Cone and plate" viscometers use a narrow-angled cone in close proximity to a flat plate. With this system, the shear rate between the geometries is constant at any given rotational speed. The viscosity can easily be calculated from shear stress (from the torque) and shear rate (from the angular velocity).
If a test with any of these geometries is run through a table of several shear rates or stresses, the data can be used to plot a flow curve, that is, a graph of viscosity versus shear rate. If such a test is carried out slowly enough for the measured value (shear stress if rate is being controlled, or conversely) to reach a steady value at each step, the data are said to be at "equilibrium", and the graph is then an "equilibrium flow curve". This is preferable to non-equilibrium measurement, because the data can usually be replicated across multiple other instruments or with other geometries.
Rheometers and viscometers work with torque and angular velocity. Since viscosity is normally considered in terms of shear stress and shear rates, a method is needed to convert from "instrument numbers" to "rheology numbers". Each measuring system used in an instrument has its associated "form factors" to convert torque to shear stress and to convert angular velocity to shear rate.
We will call the shear stress form factor C 1 and the shear rate factor C 2 .
The following form factors apply to the common measuring geometries; each converts the instrument's torque reading into a shear stress and its angular-velocity reading into a shear rate (a worked cone-and-plate example is sketched below).
For a parallel-plate system, where r is the radius of the plate, the shear stress varies across the radius; the standard factor refers to the 3/4-radius position if the test sample is Newtonian.
For a coaxial-cylinder (cup and bob) system, C 1 takes the shear stress as that occurring at an average radius r a .
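As an illustration of the form-factor approach, the sketch below uses the standard cone-and-plate factors (shear stress = 3M/(2πR³) from the torque M, shear rate = Ω/θ for a cone of radius R and small cone angle θ in radians). The numerical inputs are hypothetical and the factors for other geometries would differ.

```python
import math

def cone_plate_viscosity(torque_Nm: float, omega_rad_s: float,
                         cone_radius_m: float, cone_angle_rad: float) -> float:
    """Convert instrument readings (torque, angular velocity) into a viscosity
    using the standard cone-and-plate form factors."""
    c1 = 3.0 / (2.0 * math.pi * cone_radius_m ** 3)   # torque -> shear stress, 1/m^3
    c2 = 1.0 / cone_angle_rad                         # angular velocity -> shear rate
    shear_stress = c1 * torque_Nm          # Pa
    shear_rate = c2 * omega_rad_s          # 1/s
    return shear_stress / shear_rate       # Pa*s

# Hypothetical reading: 2 cm radius, 1-degree cone, 10 rad/s, 8.0e-5 N*m of torque
print(cone_plate_viscosity(8.0e-5, 10.0, 0.02, math.radians(1.0)))
# -> roughly 8e-3 Pa*s, i.e. a light oil
```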
The EMS viscometer measures the viscosity of liquids through observation of the rotation of a sphere driven by electromagnetic interaction: two magnets attached to a rotor create a rotating magnetic field. The sample to be measured is held in a small test tube, inside which is an aluminium sphere. The tube is located in a temperature-controlled chamber and positioned such that the sphere is situated in the centre of the two magnets.
The rotating magnetic field induces eddy currents in the sphere. The resulting Lorentz interaction between the magnetic field and these eddy currents generates a torque that rotates the sphere. The rotational speed of the sphere depends on the rotational velocity of the magnetic field, the magnitude of the magnetic field and the viscosity of the sample around the sphere. The motion of the sphere is monitored by a video camera located below the cell. The torque applied to the sphere is proportional to the difference between the angular velocity of the magnetic field Ω B and that of the sphere Ω S . There is thus a linear relationship between ( Ω B − Ω S )/ Ω S and the viscosity of the liquid.
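In practice the proportionality constant in this linear relationship is obtained by calibration against a reference fluid of known viscosity. The sketch below assumes, for simplicity, a single-point calibration with the line passing through the origin; the fluid values and slip ratios are purely illustrative.

```python
def ems_viscosity(omega_field: float, omega_sphere: float, k_cal: float) -> float:
    """Estimate viscosity from the relative slip between the rotating magnetic
    field (omega_field) and the sphere (omega_sphere), assuming the linear
    relation eta = k_cal * (omega_field - omega_sphere) / omega_sphere."""
    return k_cal * (omega_field - omega_sphere) / omega_sphere

# Hypothetical calibration: a 10 mPa*s reference oil produced 5 % relative slip
k_cal = 0.010 / 0.05   # Pa*s per unit of relative slip

# Unknown sample showing about 12 % slip at the same field speed:
print(ems_viscosity(omega_field=100.0, omega_sphere=89.3, k_cal=k_cal))
# -> roughly 0.024 Pa*s with these illustrative numbers
```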
This measuring principle was developed by Sakai et al. at the University of Tokyo. The EMS viscometer distinguishes itself from other rotational viscometers by three main characteristics: all parts of the instrument that come into direct contact with the sample are disposable and inexpensive; the measurement is performed in a sealed sample vessel; and only a very small sample volume is required.
By modifying the classic Couette-type rotational viscometer, it is possible to combine the accuracy of kinematic viscosity determination with a wide measuring range.
The outer cylinder of the Stabinger viscometer is a sample-filled tube that rotates at constant speed in a temperature-controlled copper housing. The hollow internal cylinder – shaped as a conical rotor – is centered within the sample by hydrodynamic lubrication [ 9 ] effects and centrifugal forces . In this way all bearing friction , an inevitable factor in most rotational devices, is fully avoided. The rotating fluid's shear forces drive the rotor, while a magnet inside the rotor forms an eddy current brake with the surrounding copper housing. An equilibrium rotor speed is established between driving and retarding forces, which is an unambiguous measure of the dynamic viscosity. The speed and torque measurement is implemented without direct contact by a Hall-effect sensor counting the frequency of the rotating magnetic field . This allows a highly precise torque resolution of 50 pN·m and a wide measuring range from 0.2 to 30,000 mPa·s with a single measuring system. A built-in density measurement based on the oscillating U-tube principle allows the determination of kinematic viscosity from the measured dynamic viscosity employing the relation
ν = η / ρ, where ν is the kinematic viscosity, η is the measured dynamic viscosity, and ρ is the measured density of the sample.
Bubble viscometers are used to quickly determine kinematic viscosity of known liquids such as resins and varnishes. The time required for an air bubble to rise is directly proportional to the viscosity of the liquid, so the faster the bubble rises, the lower the viscosity. The alphabetical-comparison method uses 4 sets of lettered reference tubes, A5 through Z10, of known viscosity to cover a viscosity range from 0.005 to 1,000 stokes . The direct-time method uses a single 3-line times tube for determining the "bubble seconds", which may then be converted to stokes. [ 10 ]
This method is fairly accurate, but measurements can vary because of differences in buoyancy caused by changes in the shape of the bubble in the tube. [ 10 ] However, this does not cause any serious miscalculation.
The basic design of a rectangular-slit viscometer/rheometer consists of a rectangular-slit channel with uniform cross-sectional area. A test liquid is pumped at a constant flow rate through this channel. Multiple pressure sensors, flush-mounted at fixed distances along the stream-wise direction, measure the pressure drop. [ 11 ]
Measuring principle: the slit viscometer/rheometer is based on the fundamental principle that a viscous liquid resists flow, exhibiting a decreasing pressure along the length of the slit. The pressure decrease or drop ( ∆ P ) is correlated with the shear stress at the wall boundary, while the apparent shear rate is directly related to the flow rate and the dimensions of the slit. From the imposed flow rate and the measured pressure drop, the apparent shear rate, the wall shear stress, and the apparent viscosity are calculated (a sketch of the standard slit-flow relations is given below).
To determine the viscosity of a liquid, the liquid sample is pumped through the slit channel at a constant flow rate, and the pressure drop is measured. Following these relations, the apparent viscosity is calculated for the apparent shear rate. For a Newtonian liquid, the apparent viscosity is the same as the true viscosity, and a single shear-rate measurement is sufficient. For non-Newtonian liquids, the apparent viscosity is not the true viscosity. To obtain the true viscosity, the apparent viscosities are measured at multiple apparent shear rates, and the true viscosities η at the various shear rates are then calculated by applying the Weissenberg–Rabinowitsch–Mooney correction factor, which adjusts the apparent shear rate using the local slope of the measured flow curve. [ 12 ]
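A minimal sketch of the data reduction is given below, using the standard slit-flow relations for a slit of width w and gap h (with w much greater than h) and a pressure drop ΔP measured over a length L: apparent wall shear rate 6Q/(wh²) and wall shear stress hΔP/(2L). These relations are standard for slit rheometry generally, but the dimensions and readings here are hypothetical and no particular commercial instrument is implied.

```python
def slit_apparent_viscosity(flow_rate_m3_s: float, delta_p_pa: float,
                            width_m: float, gap_m: float, length_m: float):
    """Apparent shear rate, wall shear stress and apparent viscosity for
    pressure-driven flow through a rectangular slit (width >> gap)."""
    shear_rate_app = 6.0 * flow_rate_m3_s / (width_m * gap_m ** 2)   # 1/s
    wall_stress = gap_m * delta_p_pa / (2.0 * length_m)              # Pa
    return shear_rate_app, wall_stress, wall_stress / shear_rate_app

# Hypothetical example: 10 mm x 0.5 mm slit, pressure sensors 50 mm apart,
# 1 mL/s of liquid producing a 12 kPa pressure drop
rate, stress, eta_app = slit_apparent_viscosity(1.0e-6, 12.0e3, 0.010, 0.5e-3, 0.050)
print(rate, stress, eta_app)   # ~2400 1/s, ~60 Pa, ~0.025 Pa*s
```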
The calculated true viscosity is the same as the cone and plate values at the same shear rate.
A modified version of the rectangular-slit viscometer/rheometer can also be used to determine apparent extensional viscosity .
The Krebs viscometer uses a digital graph and a small sidearm spindle to measure the viscosity of a fluid. It is mostly used in the paint industry.
Other viscometer types use balls or other objects. Viscometers that can characterize non-Newtonian fluids are usually called rheometers or plastometers . Some instruments like capillary or VROC® viscometers can measure both Newtonian and non-Newtonian fluids. [ 13 ] [ 14 ]
In the I.C.I "Oscar" viscometer, a sealed can of fluid was oscillated torsionally, and by clever measurement techniques it was possible to measure both viscosity and elasticity in the sample.
The Marsh funnel viscometer measures viscosity from the time ( efflux time ) it takes a known volume of liquid to flow from the base of a cone through a short tube. This is similar in principle to the flow cups (efflux cups) such as the Ford , Zahn and Shell cups, which use differently shaped cups and various nozzle sizes. The measurements can be done according to ISO 2431, ASTM D1200 - 10 or DIN 53411. [ 15 ]
The flexible-blade rheometer improves the accuracy of measurements for lower-viscosity liquids by utilizing the subtle changes in the flow field due to the flexibility of the moving or stationary blade (sometimes called a wing or single-side-clamped cantilever).
A rotating disk viscometer is the standard viscometer for measuring material viscosity and scorch time for rubber before vulcanization.
|
https://en.wikipedia.org/wiki/Viscometer
|
Viscosity is a measure of a fluid 's rate-dependent resistance to a change in shape or to movement of its neighboring portions relative to one another. [ 1 ] For liquids, it corresponds to the informal concept of thickness ; for example, syrup has a higher viscosity than water . [ 2 ] Viscosity is defined scientifically as a force multiplied by a time divided by an area. Thus its SI units are newton-seconds per metre squared, or pascal-seconds. [ 1 ]
Viscosity quantifies the internal frictional force between adjacent layers of fluid that are in relative motion. [ 1 ] For instance, when a viscous fluid is forced through a tube, it flows more quickly near the tube's center line than near its walls. [ 3 ] Experiments show that some stress (such as a pressure difference between the two ends of the tube) is needed to sustain the flow. This is because a force is required to overcome the friction between the layers of the fluid which are in relative motion . For a tube with a constant rate of flow, the strength of the compensating force is proportional to the fluid's viscosity.
In general, viscosity depends on a fluid's state, such as its temperature, pressure, and rate of deformation. However, the dependence on some of these properties is negligible in certain cases. For example, the viscosity of a Newtonian fluid does not vary significantly with the rate of deformation.
Zero viscosity (no resistance to shear stress ) is observed only at very low temperatures in superfluids ; otherwise, the second law of thermodynamics requires all fluids to have positive viscosity. [ 4 ] [ 5 ] A fluid that has zero viscosity (non-viscous) is called ideal or inviscid .
For non-Newtonian fluids ' viscosity, there are pseudoplastic , plastic , and dilatant flows that are time-independent, and there are thixotropic and rheopectic flows that are time-dependent.
The word "viscosity" is derived from the Latin viscum (" mistletoe "). Viscum also referred to a viscous glue derived from mistletoe berries. [ 6 ]
In materials science and engineering , there is often interest in understanding the forces or stresses involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law , which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the deformation rate over time . These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs.
Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow .
In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed u {\displaystyle u} . If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from 0 {\displaystyle 0} at the bottom to u {\displaystyle u} at the top. [ 7 ] Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed.
In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to u {\displaystyle u} at the top. Moreover, the magnitude of the force, F {\displaystyle F} , acting on the top plate is found to be proportional to the speed u {\displaystyle u} and the area A {\displaystyle A} of each plate, and inversely proportional to their separation y {\displaystyle y} : F = μ A u / y {\displaystyle F=\mu A{\frac {u}{y}}} .
The proportionality factor is the dynamic viscosity of the fluid, often simply referred to as the viscosity . It is denoted by the Greek letter mu ( μ ). The dynamic viscosity has the dimensions ( m a s s / l e n g t h ) / t i m e {\displaystyle \mathrm {(mass/length)/time} } , therefore resulting in the SI unit of pascal-second (Pa·s), equivalent to kg·m −1 ·s −1 and N·s·m −2 .
The aforementioned ratio u / y {\displaystyle u/y} is called the rate of shear deformation or shear velocity , and is the derivative of the fluid speed in the direction parallel to the normal vector of the plates. If the velocity does not vary linearly with y {\displaystyle y} , then the appropriate generalization is τ = μ ∂u/∂y {\displaystyle \tau =\mu {\frac {\partial u}{\partial y}}} ,
where τ = F / A {\displaystyle \tau =F/A} , and ∂ u / ∂ y {\displaystyle \partial u/\partial y} is the local shear velocity. This expression is referred to as Newton's law of viscosity . In shearing flows with planar symmetry, it is what defines μ {\displaystyle \mu } . It is a special case of the general definition of viscosity (see below), which can be expressed in coordinate-free form.
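As a numerical illustration of Newton's law of viscosity in the planar Couette geometry described above, the sketch below computes the shear stress and the force needed to drag the top plate; the fluid properties and plate dimensions are chosen arbitrarily for illustration.

```python
# Planar Couette flow: force needed to drag the top plate at constant speed.
mu = 1.0e-3      # dynamic viscosity of a water-like fluid, Pa*s
u = 0.5          # speed of the top plate, m/s
y = 1.0e-3       # gap between the plates, m
area = 0.02      # wetted area of the plate, m^2

shear_rate = u / y                 # 1/s, velocity varies linearly across the gap
shear_stress = mu * shear_rate     # Pa, Newton's law of viscosity: tau = mu * du/dy
force = shear_stress * area        # N, force resisting the plate's motion

print(shear_rate, shear_stress, force)   # 500.0 1/s, 0.5 Pa, 0.01 N
```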
Use of the Greek letter mu ( μ {\displaystyle \mu } ) for the dynamic viscosity (sometimes also called the absolute viscosity ) is common among mechanical and chemical engineers , as well as mathematicians and physicists. [ 8 ] [ 9 ] [ 10 ] However, the Greek letter eta ( η {\displaystyle \eta } ) is also used by chemists, physicists, and the IUPAC . [ 11 ] The viscosity μ {\displaystyle \mu } is sometimes also called the shear viscosity . However, at least one author discourages the use of this terminology, noting that μ {\displaystyle \mu } can appear in non-shearing flows in addition to shearing flows. [ 12 ]
In fluid dynamics, it is sometimes more appropriate to work in terms of kinematic viscosity (sometimes also called the momentum diffusivity ), defined as the ratio of the dynamic viscosity ( μ ) over the density of the fluid ( ρ ). It is usually denoted by the Greek letter nu ( ν ): ν = μ / ρ {\displaystyle \nu =\mu /\rho } ,
and has the dimensions ( l e n g t h ) 2 / t i m e {\displaystyle \mathrm {(length)^{2}/time} } , therefore resulting in the SI unit of metre squared per second (m 2 /s).
In very general terms, the viscous stresses in a fluid are defined as those resulting from the relative velocity of different fluid particles. As such, the viscous stresses must depend on spatial gradients of the flow velocity. If the velocity gradients are small, then to a first approximation the viscous stresses depend only on the first derivatives of the velocity. [ 13 ] (For Newtonian fluids, this is also a linear dependence.) In Cartesian coordinates, the general relationship can then be written as τ i j = ∑ k ∑ ℓ μ i j k ℓ ∂ v k / ∂ r ℓ {\displaystyle \tau _{ij}=\sum _{k}\sum _{\ell }\mu _{ijk\ell }{\frac {\partial v_{k}}{\partial r_{\ell }}}} ,
where μ i j k ℓ {\displaystyle \mu _{ijk\ell }} is a viscosity tensor that maps the velocity gradient tensor ∂ v k / ∂ r ℓ {\displaystyle \partial v_{k}/\partial r_{\ell }} onto the viscous stress tensor τ i j {\displaystyle \tau _{ij}} . [ 14 ] Since the indices in this expression can vary from 1 to 3, there are 81 "viscosity coefficients" μ i j k l {\displaystyle \mu _{ijkl}} in total. However, assuming that the viscosity rank-2 tensor is isotropic reduces these 81 coefficients to three independent parameters α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } :
and furthermore, it is assumed that no viscous forces may arise when the fluid is undergoing simple rigid-body rotation, thus β = γ {\displaystyle \beta =\gamma } , leaving only two independent parameters. [ 13 ] The most usual decomposition is in terms of the standard (scalar) viscosity μ {\displaystyle \mu } and the bulk viscosity κ {\displaystyle \kappa } such that α = κ − 2 3 μ {\displaystyle \alpha =\kappa -{\tfrac {2}{3}}\mu } and β = γ = μ {\displaystyle \beta =\gamma =\mu } . In vector notation this appears as τ = μ [ ∇ v + ( ∇ v ) T ] + ( κ − 2 3 μ ) ( ∇ ⋅ v ) δ {\displaystyle {\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {v} +(\nabla \mathbf {v} )^{\mathrm {T} }\right]+\left(\kappa -{\tfrac {2}{3}}\mu \right)(\nabla \cdot \mathbf {v} )\mathbf {\delta } } ,
where δ {\displaystyle \mathbf {\delta } } is the unit tensor. [ 12 ] [ 15 ] This equation can be thought of as a generalized form of Newton's law of viscosity.
The bulk viscosity (also called volume viscosity) expresses a type of internal friction that resists the shearless compression or expansion of a fluid. Knowledge of κ {\displaystyle \kappa } is frequently not necessary in fluid dynamics problems. For example, an incompressible fluid satisfies ∇ ⋅ v = 0 {\displaystyle \nabla \cdot \mathbf {v} =0} and so the term containing κ {\displaystyle \kappa } drops out. Moreover, κ {\displaystyle \kappa } is often assumed to be negligible for gases since it is 0 {\displaystyle 0} in a monatomic ideal gas . [ 12 ] One situation in which κ {\displaystyle \kappa } can be important is the calculation of energy loss in sound and shock waves , described by Stokes' law of sound attenuation , since these phenomena involve rapid expansions and compressions.
The defining equations for viscosity are not fundamental laws of nature, so their usefulness, as well as methods for measuring or calculating the viscosity, must be established using separate means. A potential issue is that viscosity depends, in principle, on the full microscopic state of the fluid, which encompasses the positions and momenta of every particle in the system. [ 16 ] Such highly detailed information is typically not available in realistic systems. However, under certain conditions most of this information can be shown to be negligible. In particular, for Newtonian fluids near equilibrium and far from boundaries (bulk state), the viscosity depends only on space- and time-dependent macroscopic fields (such as temperature and density) defining local equilibrium. [ 16 ] [ 17 ]
Nevertheless, viscosity may still carry a non-negligible dependence on several system properties, such as temperature, pressure, and the amplitude and frequency of any external forcing. Therefore, precision measurements of viscosity are only defined with respect to a specific fluid state. [ 18 ] To standardize comparisons among experiments and theoretical models, viscosity data is sometimes extrapolated to ideal limiting cases, such as the zero shear limit, or (for gases) the zero density limit.
Transport theory provides an alternative interpretation of viscosity in terms of momentum transport: viscosity is the material property which characterizes momentum transport within a fluid, just as thermal conductivity characterizes heat transport, and (mass) diffusivity characterizes mass transport. [ 19 ] This perspective is implicit in Newton's law of viscosity, τ = μ ( ∂ u / ∂ y ) {\displaystyle \tau =\mu (\partial u/\partial y)} , because the shear stress τ {\displaystyle \tau } has units equivalent to a momentum flux , i.e., momentum per unit time per unit area. Thus, τ {\displaystyle \tau } can be interpreted as specifying the flow of momentum in the y {\displaystyle y} direction from one fluid layer to the next. Per Newton's law of viscosity, this momentum flow occurs across a velocity gradient, and the magnitude of the corresponding momentum flux is determined by the viscosity.
The analogy with heat and mass transfer can be made explicit. Just as heat flows from high temperature to low temperature and mass flows from high density to low density, momentum flows from high velocity to low velocity. These behaviors are all described by compact expressions, called constitutive relations , whose one-dimensional forms are given here: J = − D ∂ ρ / ∂ x {\displaystyle \mathbf {J} =-D{\frac {\partial \rho }{\partial x}}} (Fick's law of diffusion), q = − k t ∂ T / ∂ x {\displaystyle \mathbf {q} =-k_{t}{\frac {\partial T}{\partial x}}} (Fourier's law of heat conduction), and τ = μ ∂ u / ∂ y {\displaystyle \tau =\mu {\frac {\partial u}{\partial y}}} (Newton's law of viscosity),
where ρ {\displaystyle \rho } is the density, J {\displaystyle \mathbf {J} } and q {\displaystyle \mathbf {q} } are the mass and heat fluxes, and D {\displaystyle D} and k t {\displaystyle k_{t}} are the mass diffusivity and thermal conductivity. [ 20 ] The fact that mass, momentum, and energy (heat) transport are among the most relevant processes in continuum mechanics is not a coincidence: these are among the few physical quantities that are conserved at the microscopic level in interparticle collisions. Thus, rather than being dictated by the fast and complex microscopic interaction timescale, their dynamics occurs on macroscopic timescales, as described by the various equations of transport theory and hydrodynamics.
Newton's law of viscosity is not a fundamental law of nature, but rather a constitutive equation (like Hooke's law , Fick's law , and Ohm's law ) which serves to define the viscosity μ {\displaystyle \mu } . Its form is motivated by experiments which show that for a wide range of fluids, μ {\displaystyle \mu } is independent of strain rate. Such fluids are called Newtonian . Gases , water , and many common liquids can be considered Newtonian in ordinary conditions and contexts. However, there are many non-Newtonian fluids that significantly deviate from this behavior. Examples include shear-thickening (dilatant) liquids, whose viscosity increases with the rate of shear strain; shear-thinning (pseudoplastic) liquids, whose viscosity decreases with the rate of shear strain; thixotropic liquids, which become less viscous over time when shaken or otherwise stressed; rheopectic liquids, which become more viscous over time when shaken or otherwise stressed; and Bingham plastics, which behave as a solid at low stresses but flow as a viscous fluid at high stresses.
Trouton 's ratio is the ratio of extensional viscosity to shear viscosity . For a Newtonian fluid, the Trouton ratio is 3. [ 21 ] [ 22 ] Shear-thinning liquids are very commonly, but misleadingly, described as thixotropic. [ 23 ]
Viscosity may also depend on the fluid's physical state (temperature and pressure) and other, external , factors. For gases and other compressible fluids , it depends on temperature and varies very slowly with pressure. The viscosity of some fluids may depend on other factors. A magnetorheological fluid , for example, becomes thicker when subjected to a magnetic field , possibly to the point of behaving like a solid.
The viscous forces that arise during fluid flow are distinct from the elastic forces that occur in a solid in response to shear, compression, or extension stresses. While in the latter the stress is proportional to the amount of shear deformation, in a fluid it is proportional to the rate of deformation over time. For this reason, James Clerk Maxwell used the term fugitive elasticity for fluid viscosity.
However, many liquids (including water) will briefly react like elastic solids when subjected to sudden stress. Conversely, many "solids" (even granite ) will flow like liquids, albeit very slowly, even under arbitrarily small stress. [ 24 ] Such materials are best described as viscoelastic —that is, possessing both elasticity (reaction to deformation) and viscosity (reaction to rate of deformation).
Viscoelastic solids may exhibit both shear viscosity and bulk viscosity. The extensional viscosity is a linear combination of the shear and bulk viscosities that describes the reaction of a solid elastic material to elongation. It is widely used for characterizing polymers.
In geology , earth materials that exhibit viscous deformation at least three orders of magnitude greater than their elastic deformation are sometimes called rheids . [ 25 ]
Viscosity is measured with various types of viscometers and rheometers . Close temperature control of the fluid is essential to obtain accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C. A rheometer is used for fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. [ 26 ]
For some fluids, the viscosity is constant over a wide range of shear rates ( Newtonian fluids ). The fluids without a constant viscosity ( non-Newtonian fluids ) cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate.
One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer.
In coating industries, viscosity may be measured with a cup in which the efflux time is measured. There are several sorts of cup—such as the Zahn cup and the Ford viscosity cup —with the usage of each type varying mainly according to the industry.
Also used in coatings, a Stormer viscometer employs load-based rotation to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers.
Vibrating viscometers can also be used to measure viscosity. Resonant, or vibrational viscometers work by creating shear waves within the liquid. In this method, the sensor is submerged in the fluid and is made to resonate at a specific frequency. As the surface of the sensor shears through the liquid, energy is lost due to its viscosity. This dissipated energy is then measured and converted into a viscosity reading. A higher viscosity causes a greater loss of energy. [ citation needed ]
Extensional viscosity can be measured with various rheometers that apply extensional stress .
Volume viscosity can be measured with an acoustic rheometer .
Apparent viscosity is a calculation derived from tests performed on drilling fluid used in oil or gas well development. These calculations and tests help engineers develop and maintain the properties of the drilling fluid to the specifications required.
Nanoviscosity (viscosity sensed by nanoprobes) can be measured by fluorescence correlation spectroscopy . [ 27 ]
The SI unit of dynamic viscosity is the newton -second per metre squared (N·s/m 2 ), also frequently expressed in the equivalent forms pascal - second (Pa·s), kilogram per meter per second (kg·m −1 ·s −1 ) and poiseuille (Pl). The CGS unit is the poise (P, or g·cm −1 ·s −1 = 0.1 Pa·s), [ 28 ] named after Jean Léonard Marie Poiseuille . It is commonly expressed, particularly in ASTM standards, as centipoise (cP). The centipoise is convenient because the viscosity of water at 20 °C is about 1 cP, and one centipoise is equal to the SI millipascal second (mPa·s).
The SI unit of kinematic viscosity is metre squared per second (m 2 /s), whereas the CGS unit for kinematic viscosity is the stokes (St, or cm 2 ·s −1 = 0.0001 m 2 ·s −1 ), named after Sir George Gabriel Stokes . [ 29 ] In U.S. usage, stoke is sometimes used as the singular form. The submultiple centistokes (cSt) is often used instead, 1 cSt = 1 mm 2 ·s −1 = 10 −6 m 2 ·s −1 . 1 cSt equals 1 cP divided by a density of 1000 kg/m 3 , which is close to the density of water. The kinematic viscosity of water at 20 °C is about 1 cSt.
The most frequently used systems of US customary, or Imperial , units are the British Gravitational (BG) and English Engineering (EE). In the BG system, dynamic viscosity has units of pound -seconds per square foot (lb·s/ft 2 ), and in the EE system it has units of pound-force -seconds per square foot (lbf·s/ft 2 ). The pound and pound-force are equivalent; the two systems differ only in how force and mass are defined. In the BG system the pound is a basic unit from which the unit of mass (the slug ) is defined by Newton's second law , whereas in the EE system the units of force and mass (the pound-force and pound-mass respectively) are defined independently through the second law using the proportionality constant g c .
Kinematic viscosity has units of square feet per second (ft 2 /s) in both the BG and EE systems.
Nonstandard units include the reyn (lbf·s/in 2 ), a British unit of dynamic viscosity. [ 30 ] In the automotive industry the viscosity index is used to describe the change of viscosity with temperature.
The reciprocal of viscosity is fluidity , usually symbolized by ϕ = 1 / μ {\displaystyle \phi =1/\mu } or F = 1 / μ {\displaystyle F=1/\mu } , depending on the convention used, measured in reciprocal poise (P −1 , or cm · s · g −1 ), sometimes called the rhe . Fluidity is seldom used in engineering practice. [ citation needed ]
At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer , and expressing kinematic viscosity in units of Saybolt universal seconds (SUS). [ 31 ] Other abbreviations such as SSU ( Saybolt seconds universal ) or SUV ( Saybolt universal viscosity ) are sometimes used. Kinematic viscosity in centistokes can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161.
Momentum transport in gases is mediated by discrete molecular collisions, and in liquids by attractive forces that bind molecules close together. [ 19 ] Because of this, the dynamic viscosities of liquids are typically much larger than those of gases. In addition, viscosity tends to increase with temperature in gases and decrease with temperature in liquids.
Above the liquid-gas critical point , the liquid and gas phases are replaced by a single supercritical phase . In this regime, the mechanisms of momentum transport interpolate between liquid-like and gas-like behavior.
For example, along a supercritical isobar (constant-pressure surface), the kinematic viscosity decreases at low temperature and increases at high temperature, with a minimum in between. [ 32 ] [ 33 ] A rough estimate for the value at the minimum is ν min = ( 1 / 4 π ) ℏ / m e m {\displaystyle \nu _{\text{min}}={\frac {1}{4\pi }}{\frac {\hbar }{\sqrt {m_{\text{e}}m}}}} , where ℏ {\displaystyle \hbar } is the reduced Planck constant , m e {\displaystyle m_{\text{e}}} is the electron mass , and m {\displaystyle m} is the molecular mass. [ 33 ]
In general, however, the viscosity of a system depends in detail on how the molecules constituting the system interact, and there are no simple but correct formulas for it. The simplest exact expressions are the Green–Kubo relations for the linear shear viscosity or the transient time correlation function expressions derived by Evans and Morriss in 1988. [ 34 ] Although these expressions are each exact, calculating the viscosity of a dense fluid using these relations currently requires the use of molecular dynamics computer simulations. Somewhat more progress can be made for a dilute gas, as elementary assumptions about how gas molecules move and interact lead to a basic understanding of the molecular origins of viscosity. More sophisticated treatments can be constructed by systematically coarse-graining the equations of motion of the gas molecules. An example of such a treatment is Chapman–Enskog theory , which derives expressions for the viscosity of a dilute gas from the Boltzmann equation . [ 17 ]
Consider a dilute gas moving parallel to the x {\displaystyle x} -axis with velocity u ( y ) {\displaystyle u(y)} that depends only on the y {\displaystyle y} coordinate. To simplify the discussion, the gas is assumed to have uniform temperature and density.
Under these assumptions, the x {\displaystyle x} velocity of a molecule passing through y = 0 {\displaystyle y=0} is equal to whatever velocity that molecule had when its mean free path λ {\displaystyle \lambda } began. Because λ {\displaystyle \lambda } is typically small compared with macroscopic scales, the average x {\displaystyle x} velocity of such a molecule has the form u ( 0 ) ± α λ d u d y ( 0 ) {\displaystyle u(0)\pm \alpha \lambda {\frac {du}{dy}}(0)} , with the sign determined by whether the molecule arrives from above or below the plane,
where α {\displaystyle \alpha } is a numerical constant on the order of 1 {\displaystyle 1} . (Some authors estimate α = 2 / 3 {\displaystyle \alpha =2/3} ; [ 19 ] [ 35 ] on the other hand, a more careful calculation for rigid elastic spheres gives α ≃ 0.998 {\displaystyle \alpha \simeq 0.998} .) Next, because half the molecules on either side are moving towards y = 0 {\displaystyle y=0} , and doing so on average with half the average molecular speed ( 8 k B T / π m ) 1 / 2 {\displaystyle (8k_{\text{B}}T/\pi m)^{1/2}} , the momentum flux from either side is 1 4 ρ ( 8 k B T / π m ) 1 / 2 [ u ( 0 ) ± α λ d u d y ( 0 ) ] {\displaystyle {\tfrac {1}{4}}\rho \,(8k_{\text{B}}T/\pi m)^{1/2}\left[u(0)\pm \alpha \lambda {\frac {du}{dy}}(0)\right]} .
The net momentum flux at y = 0 {\displaystyle y=0} is the difference of the two: − 1 2 ρ α λ ( 8 k B T / π m ) 1 / 2 d u d y ( 0 ) {\displaystyle -{\tfrac {1}{2}}\rho \,\alpha \lambda \,(8k_{\text{B}}T/\pi m)^{1/2}{\frac {du}{dy}}(0)} .
According to the definition of viscosity, this momentum flux should be equal to − μ d u d y ( 0 ) {\displaystyle -\mu {\frac {du}{dy}}(0)} , which leads to μ = α ρ λ ( 2 k B T / π m ) 1 / 2 {\displaystyle \mu =\alpha \rho \lambda \,(2k_{\text{B}}T/\pi m)^{1/2}} .
Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. An elementary calculation for a dilute gas at temperature T {\displaystyle T} and density ρ {\displaystyle \rho } gives μ = α ρ λ 2 k B T / π m {\displaystyle \mu =\alpha \rho \lambda {\sqrt {2k_{\text{B}}T/\pi m}}} ,
where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant , m {\displaystyle m} the molecular mass, and α {\displaystyle \alpha } a numerical constant on the order of 1 {\displaystyle 1} . The quantity λ {\displaystyle \lambda } , the mean free path , measures the average distance a molecule travels between collisions. Even without a priori knowledge of α {\displaystyle \alpha } , this expression has nontrivial implications. In particular, since λ {\displaystyle \lambda } is typically inversely proportional to density and increases with temperature, μ {\displaystyle \mu } itself should increase with temperature and be independent of density at fixed temperature. In fact, both of these predictions persist in more sophisticated treatments, and accurately describe experimental observations. By contrast, liquid viscosity typically decreases with temperature. [ 19 ] [ 35 ]
For rigid elastic spheres of diameter σ {\displaystyle \sigma } , λ {\displaystyle \lambda } can be computed as λ = m / ( 2 π σ 2 ρ ) {\displaystyle \lambda =m/({\sqrt {2}}\pi \sigma ^{2}\rho )} , giving μ = α π 3 / 2 m k B T σ 2 {\displaystyle \mu ={\frac {\alpha }{\pi ^{3/2}}}{\frac {\sqrt {mk_{\text{B}}T}}{\sigma ^{2}}}} .
In this case λ {\displaystyle \lambda } is independent of temperature, so μ ∝ T 1 / 2 {\displaystyle \mu \propto T^{1/2}} . For more complicated molecular models, however, λ {\displaystyle \lambda } depends on temperature in a non-trivial way, and simple kinetic arguments as used here are inadequate. More fundamentally, the notion of a mean free path becomes imprecise for particles that interact over a finite range, which limits the usefulness of the concept for describing real-world gases. [ 36 ]
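The hard-sphere expression above is easy to evaluate numerically. The sketch below applies it to a nitrogen-like gas using α = 1 and an assumed effective molecular diameter of about 0.37 nm; both values are rough illustrative choices, yet the result comes out close to the measured viscosity of air at room temperature (about 1.8×10⁻⁵ Pa·s).

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K

def hard_sphere_viscosity(mass_kg: float, diameter_m: float, temperature_K: float,
                          alpha: float = 1.0) -> float:
    """Elementary kinetic-theory estimate mu = alpha*sqrt(m*k_B*T)/(pi**1.5 * sigma**2)."""
    return alpha * math.sqrt(mass_kg * K_B * temperature_K) / (math.pi ** 1.5 * diameter_m ** 2)

m_n2 = 28.0 * 1.6605e-27        # mass of an N2 molecule, kg
sigma = 3.7e-10                 # assumed effective hard-sphere diameter, m
print(hard_sphere_viscosity(m_n2, sigma, 300.0))   # roughly 1.8e-5 Pa*s
```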
A technique developed by Sydney Chapman and David Enskog in the early 1900s allows a more refined calculation of μ {\displaystyle \mu } . [ 17 ] It is based on the Boltzmann equation , which provides a statistical description of a dilute gas in terms of intermolecular interactions. [ 37 ] The technique allows accurate calculation of μ {\displaystyle \mu } for molecular models that are more realistic than rigid elastic spheres, such as those incorporating intermolecular attractions. Doing so is necessary to reproduce the correct temperature dependence of μ {\displaystyle \mu } , which experiments show increases more rapidly than the T 1 / 2 {\displaystyle T^{1/2}} trend predicted for rigid elastic spheres. [ 19 ] Indeed, the Chapman–Enskog analysis shows that the predicted temperature dependence can be tuned by varying the parameters in various molecular models. A simple example is the Sutherland model, [ a ] which describes rigid elastic spheres with weak mutual attraction. In such a case, the attractive force can be treated perturbatively , which leads to a simple expression for μ {\displaystyle \mu } :
where S {\displaystyle S} is independent of temperature, being determined only by the parameters of the intermolecular attraction. To connect with experiment, it is convenient to rewrite this as μ = μ 0 ( T / T 0 ) 3 / 2 ( T 0 + S ) / ( T + S ) {\displaystyle \mu =\mu _{0}\left({\frac {T}{T_{0}}}\right)^{3/2}{\frac {T_{0}+S}{T+S}}} ,
where μ 0 {\displaystyle \mu _{0}} is the viscosity at temperature T 0 {\displaystyle T_{0}} . This expression is usually named Sutherland's formula. [ 38 ] If μ {\displaystyle \mu } is known from experiments at T = T 0 {\displaystyle T=T_{0}} and at least one other temperature, then S {\displaystyle S} can be calculated. Expressions for μ {\displaystyle \mu } obtained in this way are qualitatively accurate for a number of simple gases. Slightly more sophisticated models, such as the Lennard-Jones potential , or the more flexible Mie potential , may provide better agreement with experiments, but only at the cost of a more opaque dependence on temperature. A further advantage of these more complex interaction potentials is that they can be used to develop accurate models for a wide variety of properties using the same potential parameters. In situations where little experimental data is available, this makes it possible to obtain model parameters from fitting to properties such as pure-fluid vapour-liquid equilibria , before using the parameters thus obtained to predict the viscosities of interest with reasonable accuracy.
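As an illustration, Sutherland's formula is often evaluated for air using reference values commonly quoted in engineering texts (about μ₀ = 1.716×10⁻⁵ Pa·s at T₀ = 273.15 K with S ≈ 110.4 K); these constants come from standard references rather than from this article and are used here only as an example.

```python
def sutherland_viscosity(T: float, mu0: float = 1.716e-5,
                         T0: float = 273.15, S: float = 110.4) -> float:
    """Sutherland's formula mu = mu0 * (T/T0)**1.5 * (T0 + S)/(T + S).
    Default constants are commonly quoted values for air."""
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

print(sutherland_viscosity(300.0))   # ~1.85e-5 Pa*s for air at 300 K
```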
In some systems, the assumption of spherical symmetry must be abandoned, as is the case for vapors with highly polar molecules like H 2 O . In these cases, the Chapman–Enskog analysis is significantly more complicated. [ 39 ] [ 40 ]
In the kinetic-molecular picture, a non-zero bulk viscosity arises in gases whenever there are non-negligible relaxational timescales governing the exchange of energy between the translational energy of molecules and their internal energy, e.g. rotational and vibrational . As such, the bulk viscosity is 0 {\displaystyle 0} for a monatomic ideal gas, in which the internal energy of molecules is negligible, but is nonzero for a gas like carbon dioxide , whose molecules possess both rotational and vibrational energy. [ 41 ] [ 42 ]
In contrast with gases, there is no simple yet accurate picture for the molecular origins of viscosity in liquids.
At the simplest level of description, the relative motion of adjacent layers in a liquid is opposed primarily by attractive molecular forces acting across the layer boundary. In this picture, one (correctly) expects viscosity to decrease with increasing temperature. This is because increasing temperature increases the random thermal motion of the molecules, which makes it easier for them to overcome their attractive interactions. [ 43 ]
Building on this visualization, a simple theory can be constructed in analogy with the discrete structure of a solid: groups of molecules in a liquid are visualized as forming "cages" which surround and enclose single molecules. [ 44 ] These cages can be occupied or unoccupied, and stronger molecular attraction corresponds to stronger cages. Due to random thermal motion, a molecule "hops" between cages at a rate which varies inversely with the strength of molecular attractions. In equilibrium these "hops" are not biased in any direction. On the other hand, in order for two adjacent layers to move relative to each other, the "hops" must be biased in the direction of the relative motion. The force required to sustain this directed motion can be estimated for a given shear rate, leading to μ ≈ ( N A h / V ) e 3.8 T b / T {\displaystyle \mu \approx {\frac {N_{\text{A}}h}{V}}e^{3.8T_{\text{b}}/T}} , (1)
where N A {\displaystyle N_{\text{A}}} is the Avogadro constant , h {\displaystyle h} is the Planck constant , V {\displaystyle V} is the volume of a mole of liquid, and T b {\displaystyle T_{\text{b}}} is the normal boiling point . This result has the same form as the well-known empirical relation μ = A e B / T {\displaystyle \mu =Ae^{B/T}} , (2)
where A {\displaystyle A} and B {\displaystyle B} are constants fit from data. [ 44 ] [ 45 ] On the other hand, several authors express caution with respect to this model.
Errors as large as 30% can be encountered using equation ( 1 ), compared with fitting equation ( 2 ) to experimental data. [ 44 ] More fundamentally, the physical assumptions underlying equation ( 1 ) have been criticized. [ 46 ] It has also been argued that the exponential dependence in equation ( 1 ) does not necessarily describe experimental observations more accurately than simpler, non-exponential expressions. [ 47 ] [ 48 ]
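A rough evaluation of equation ( 1 ) is straightforward. The sketch below applies it to a benzene-like liquid using an approximate molar volume and normal boiling point; these inputs are illustrative only, and, as noted above, the result should only be expected to fall within a few tens of percent of the measured viscosity.

```python
import math

N_A = 6.02214e23       # Avogadro constant, 1/mol
H = 6.62607e-34        # Planck constant, J*s

def cage_model_viscosity(molar_volume_m3: float, boiling_point_K: float, T: float) -> float:
    """Equation (1): mu ~ (N_A*h/V) * exp(3.8*T_b/T)."""
    return (N_A * H / molar_volume_m3) * math.exp(3.8 * boiling_point_K / T)

# Benzene-like values: molar volume ~89 cm^3/mol, normal boiling point ~353 K
print(cage_model_viscosity(89e-6, 353.2, 298.15))
# -> on the order of a few times 1e-4 Pa*s; measured benzene is ~6e-4 Pa*s at 25 degC
```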
In light of these shortcomings, the development of a less ad hoc model is a matter of practical interest. Foregoing simplicity in favor of precision, it is possible to write rigorous expressions for viscosity starting from the fundamental equations of motion for molecules. A classic example of this approach is Irving–Kirkwood theory. [ 49 ] On the other hand, such expressions are given as averages over multiparticle correlation functions and are therefore difficult to apply in practice.
In general, empirically derived expressions (based on existing viscosity measurements) appear to be the only consistently reliable means of calculating viscosity in liquids. [ 50 ]
Local atomic structure changes observed in undercooled liquids on cooling below the equilibrium melting temperature, in terms of either the radial distribution function g ( r ) [ 51 ] or the structure factor S ( Q ), [ 52 ] are found to be directly responsible for the liquid fragility: the deviation of the temperature dependence of the undercooled liquid's viscosity from the Arrhenius equation (2), caused by a modification of the activation energy for viscous flow. Equilibrium liquids, by contrast, follow the Arrhenius equation.
The same molecular-kinetic picture of a single component gas can also be applied to a gaseous mixture. For instance, in the Chapman–Enskog approach the viscosity μ mix {\displaystyle \mu _{\text{mix}}} of a binary mixture of gases can be written in terms of the individual component viscosities μ 1 , 2 {\displaystyle \mu _{1,2}} , their respective volume fractions, and the intermolecular interactions. [ 17 ]
As for the single-component gas, the dependence of μ mix {\displaystyle \mu _{\text{mix}}} on the parameters of the intermolecular interactions enters through various collisional integrals which may not be expressible in closed form . To obtain usable expressions for μ mix {\displaystyle \mu _{\text{mix}}} which reasonably match experimental data, the collisional integrals may be computed numerically or from correlations. [ 53 ] In some cases, the collision integrals are regarded as fitting parameters, and are fitted directly to experimental data. [ 54 ] This is a common approach in the development of reference equations for gas-phase viscosities. An example of such a procedure is the Sutherland approach for the single-component gas, discussed above.
For gas mixtures consisting of simple molecules, Revised Enskog Theory has been shown to accurately represent both the density- and temperature dependence of the viscosity over a wide range of conditions. [ 55 ] [ 53 ]
As for pure liquids, the viscosity of a blend of liquids is difficult to predict from molecular principles. One method is to extend the molecular "cage" theory presented above for a pure liquid. This can be done with varying levels of sophistication. One expression resulting from such an analysis is the Lederer–Roegiers equation for a binary mixture: ln μ blend = x 1 x 1 + α x 2 ln μ 1 + α x 2 x 1 + α x 2 ln μ 2 {\displaystyle \ln \mu _{\text{blend}}={\frac {x_{1}}{x_{1}+\alpha x_{2}}}\ln \mu _{1}+{\frac {\alpha x_{2}}{x_{1}+\alpha x_{2}}}\ln \mu _{2}} ,
where α {\displaystyle \alpha } is an empirical parameter, and x 1 , 2 {\displaystyle x_{1,2}} and μ 1 , 2 {\displaystyle \mu _{1,2}} are the respective mole fractions and viscosities of the component liquids. [ 56 ]
Since blending is an important process in the lubricating and oil industries, a variety of empirical and proprietary equations exist for predicting the viscosity of a blend. [ 56 ]
Depending on the solute and range of concentration, an aqueous electrolyte solution can have either a larger or smaller viscosity compared with pure water at the same temperature and pressure. For instance, a 20% saline ( sodium chloride ) solution has viscosity over 1.5 times that of pure water, whereas a 20% potassium iodide solution has viscosity about 0.91 times that of pure water.
An idealized model of dilute electrolytic solutions leads to the following prediction for the viscosity μ s {\displaystyle \mu _{s}} of a solution: [ 57 ] μ s / μ 0 = 1 + A c {\displaystyle {\frac {\mu _{s}}{\mu _{0}}}=1+A{\sqrt {c}}} ,
where μ 0 {\displaystyle \mu _{0}} is the viscosity of the solvent, c {\displaystyle c} is the concentration, and A {\displaystyle A} is a positive constant which depends on both solvent and solute properties. However, this expression is only valid for very dilute solutions, having c {\displaystyle c} less than 0.1 mol/L. [ 58 ] For higher concentrations, additional terms are necessary which account for higher-order molecular correlations: μ s / μ 0 = 1 + A c + B c + C c 2 {\displaystyle {\frac {\mu _{s}}{\mu _{0}}}=1+A{\sqrt {c}}+Bc+Cc^{2}} ,
where B {\displaystyle B} and C {\displaystyle C} are fit from data. In particular, a negative value of B {\displaystyle B} is able to account for the decrease in viscosity observed in some solutions. Estimated values of these constants have been tabulated for sodium chloride and potassium iodide at a temperature of 25 °C (mol = mole , L = liter ). [ 57 ]
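Once the coefficients for a given salt and solvent are known, the concentration dependence can be evaluated directly. The coefficients in the sketch below are placeholders chosen only to show the shape of the expansion; they are not the tabulated values for any particular salt.

```python
import math

def electrolyte_relative_viscosity(c_mol_L: float, A: float, B: float, C: float = 0.0) -> float:
    """Relative viscosity mu_s/mu_0 = 1 + A*sqrt(c) + B*c + C*c**2 (Jones-Dole-type expansion)."""
    return 1.0 + A * math.sqrt(c_mol_L) + B * c_mol_L + C * c_mol_L ** 2

# Placeholder coefficients for illustration only (not measured values):
print(electrolyte_relative_viscosity(0.5, A=0.006, B=0.08))   # ~1.04
```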
In a suspension of solid particles (e.g. micron -size spheres suspended in oil), an effective viscosity μ eff {\displaystyle \mu _{\text{eff}}} can be defined in terms of stress and strain components which are averaged over a volume large compared with the distance between the suspended particles, but small with respect to macroscopic dimensions. [ 59 ] Such suspensions generally exhibit non-Newtonian behavior. However, for dilute systems in steady flows, the behavior is Newtonian and expressions for μ eff {\displaystyle \mu _{\text{eff}}} can be derived directly from the particle dynamics. In a very dilute system, with volume fraction ϕ ≲ 0.02 {\displaystyle \phi \lesssim 0.02} , interactions between the suspended particles can be ignored. In such a case one can explicitly calculate the flow field around each particle independently, and combine the results to obtain μ eff {\displaystyle \mu _{\text{eff}}} . For spheres, this results in Einstein's effective viscosity formula: μ eff = μ 0 ( 1 + 5 2 ϕ ) {\displaystyle \mu _{\text{eff}}=\mu _{0}\left(1+{\tfrac {5}{2}}\phi \right)} ,
where μ 0 {\displaystyle \mu _{0}} is the viscosity of the suspending liquid. The linear dependence on ϕ {\displaystyle \phi } is a consequence of neglecting interparticle interactions. For dilute systems in general, one expects μ eff {\displaystyle \mu _{\text{eff}}} to take the form μ eff = μ 0 ( 1 + B ϕ ) {\displaystyle \mu _{\text{eff}}=\mu _{0}(1+B\phi )} ,
where the coefficient B {\displaystyle B} may depend on the particle shape (e.g. spheres, rods, disks). [ 60 ] Experimental determination of the precise value of B {\displaystyle B} is difficult, however: even the prediction B = 5 / 2 {\displaystyle B=5/2} for spheres has not been conclusively validated, with various experiments finding values in the range 1.5 ≲ B ≲ 5 {\displaystyle 1.5\lesssim B\lesssim 5} . This deficiency has been attributed to difficulty in controlling experimental conditions. [ 61 ]
In denser suspensions, μ eff {\displaystyle \mu _{\text{eff}}} acquires a nonlinear dependence on ϕ {\displaystyle \phi } , which indicates the importance of interparticle interactions. Various analytical and semi-empirical schemes exist for capturing this regime. At the most basic level, a term quadratic in ϕ {\displaystyle \phi } is added to μ eff {\displaystyle \mu _{\text{eff}}} : μ eff = μ 0 ( 1 + B ϕ + B 1 ϕ 2 ) {\displaystyle \mu _{\text{eff}}=\mu _{0}(1+B\phi +B_{1}\phi ^{2})} ,
and the coefficient B 1 {\displaystyle B_{1}} is fit from experimental data or approximated from the microscopic theory. However, some authors advise caution in applying such simple formulas since non-Newtonian behavior appears in dense suspensions ( ϕ ≳ 0.25 {\displaystyle \phi \gtrsim 0.25} for spheres), [ 61 ] or in suspensions of elongated or flexible particles. [ 59 ]
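A minimal sketch of these dilute-suspension corrections for spheres is given below. The quadratic coefficient is set to 6.2, a value often quoted for Brownian hard spheres (Batchelor); it is used here only as an illustrative assumption, and, as discussed above, simple formulas of this kind should not be pushed into the dense, non-Newtonian regime.

```python
def suspension_viscosity(mu0: float, phi: float, B: float = 2.5, B1: float = 6.2) -> float:
    """Effective viscosity of a dilute suspension of spheres:
    mu_eff = mu0 * (1 + B*phi + B1*phi**2).
    B = 5/2 is Einstein's coefficient; B1 ~ 6.2 is a commonly quoted
    second-order value for Brownian hard spheres (assumed here)."""
    return mu0 * (1.0 + B * phi + B1 * phi ** 2)

# 5 % by volume of solid spheres in a 10 mPa*s oil:
print(suspension_viscosity(0.010, 0.05))   # ~0.0114 Pa*s
```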
There is a distinction between a suspension of solid particles, described above, and an emulsion . The latter is a suspension of tiny droplets, which themselves may exhibit internal circulation. The presence of internal circulation can decrease the observed effective viscosity, and different theoretical or semi-empirical models must be used. [ 62 ]
In the high and low temperature limits, viscous flow in amorphous materials (e.g. in glasses and melts) [ 64 ] [ 65 ] [ 66 ] has the Arrhenius form : μ = A e Q / ( R T ) {\displaystyle \mu =Ae^{Q/(RT)}} ,
where Q is a relevant activation energy , given in terms of molecular parameters; T is temperature; R is the molar gas constant ; and A is approximately a constant. The activation energy Q takes a different value depending on whether the high or low temperature limit is being considered: it changes from a high value Q H at low temperatures (in the glassy state) to a low value Q L at high temperatures (in the liquid state).
For intermediate temperatures, Q {\displaystyle Q} varies nontrivially with temperature and the simple Arrhenius form fails. On the other hand, the two-exponential equation
where A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} , D {\displaystyle D} are all constants, provides a good fit to experimental data over the entire range of temperatures, while at the same time reducing to the correct Arrhenius form in the low and high temperature limits. This expression, also known as the Douglas–Doremus–Ojovan model, [ 67 ] can be motivated from various theoretical models of amorphous materials at the atomic level. [ 65 ]
A two-exponential equation for the viscosity can be derived within the Dyre shoving model of supercooled liquids, where the Arrhenius energy barrier is identified with the high-frequency shear modulus times a characteristic shoving volume. [ 68 ] [ 69 ] Upon specifying the temperature dependence of the shear modulus via thermal expansion and via the repulsive part of the intermolecular potential, another two-exponential equation is retrieved: [ 70 ]
where C G {\displaystyle C_{G}} denotes the high-frequency shear modulus of the material evaluated at a temperature equal to the glass transition temperature T g {\displaystyle T_{g}} , V c {\displaystyle V_{c}} is the so-called shoving volume, i.e. the characteristic volume of the group of atoms involved in the shoving event by which an atom/molecule escapes from the cage of nearest neighbours, typically on the order of the volume occupied by a few atoms. Furthermore, α T {\displaystyle \alpha _{T}} is the thermal expansion coefficient of the material, λ {\displaystyle \lambda } is a parameter which measures the steepness of the power-law rise of the ascending flank of the first peak of the radial distribution function , and is quantitatively related to the repulsive part of the interatomic potential . [ 70 ] Finally, k B {\displaystyle k_{B}} denotes the Boltzmann constant .
In the study of turbulence in fluids , a common practical strategy is to ignore the small-scale vortices (or eddies ) in the motion and to calculate a large-scale motion with an effective viscosity, called the "eddy viscosity", which characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation ). [ 71 ] [ 72 ] In contrast to the viscosity of the fluid itself, which must be positive by the second law of thermodynamics , the eddy viscosity can be negative. [ 73 ] [ 74 ]
Because viscosity depends continuously on temperature and pressure, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available at the temperatures and pressures of interest. This capability is important for thermophysical simulations, in which the temperature and pressure of a fluid can vary continuously with space and time. A similar situation is encountered for mixtures of pure fluids, where the viscosity depends continuously on the concentration ratios of the constituent fluids.
For the simplest fluids, such as dilute monatomic gases and their mixtures, ab initio quantum mechanical computations can accurately predict viscosity in terms of fundamental atomic constants, i.e., without reference to existing viscosity measurements. [ 75 ] For the special case of dilute helium, uncertainties in the ab initio calculated viscosity are two orders of magnitude smaller than uncertainties in experimental values. [ 76 ]
For slightly more complex fluids and mixtures at moderate densities (i.e. sub-critical densities ) Revised Enskog Theory can be used to predict viscosities with some accuracy. [ 53 ] Revised Enskog Theory is predictive in the sense that predictions for viscosity can be obtained using parameters fitted to other, pure-fluid thermodynamic properties or transport properties , thus requiring no a priori experimental viscosity measurements.
For most fluids, high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing viscosity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that fluid. Reference correlations have been published for many pure fluids; a few examples are water , carbon dioxide , ammonia , benzene , and xenon . [ 77 ] [ 78 ] [ 79 ] [ 80 ] [ 81 ] Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases.
Thermophysical modeling software often relies on reference correlations for predicting viscosity at user-specified temperature and pressure.
These correlations may be proprietary . Examples are REFPROP [ 82 ] (proprietary) and CoolProp [ 83 ] (open-source).
Viscosity can also be computed using formulas that express it in terms of the statistics of individual particle trajectories. These formulas include the Green–Kubo relations for the linear shear viscosity and the transient time correlation function expressions derived by Evans and Morriss in 1988. [ 84 ] [ 34 ] The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics .
An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules. [ 85 ]
Observed values of viscosity vary over several orders of magnitude, even for common substances (see the order of magnitude table below). For instance, a 70% sucrose (sugar) solution has a viscosity over 400 times that of water, and 26,000 times that of air. [ 87 ] More dramatically, pitch has been estimated to have a viscosity 230 billion times that of water. [ 86 ]
The dynamic viscosity μ of water is about 0.89 mPa·s at room temperature (25 °C). As a function of the temperature T in kelvins , the viscosity can be estimated using the semi-empirical Vogel-Fulcher-Tammann equation :

μ(T) = A exp( B / (T − C) )

where A = 0.02939 mPa·s, B = 507.88 K, and C = 149.3 K. [ 88 ] Experimentally determined values of the viscosity are also given in the table below. The values at 20 °C are a useful reference: there, the dynamic viscosity is about 1 cP and the kinematic viscosity is about 1 cSt.
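As a worked check of this correlation, a short Python sketch (the function name is arbitrary):

```python
import math

def water_viscosity_mPas(T_kelvin: float) -> float:
    """Vogel-Fulcher-Tammann estimate of water's dynamic viscosity in mPa*s."""
    A = 0.02939  # mPa*s
    B = 507.88   # K
    C = 149.3    # K
    return A * math.exp(B / (T_kelvin - C))

print(water_viscosity_mPas(298.15))  # about 0.89 mPa*s at 25 degC
print(water_viscosity_mPas(293.15))  # about 1.0 mPa*s at 20 degC
```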
Under standard atmospheric conditions (25 °C and a pressure of 1 bar), the dynamic viscosity of air is 18.5 μPa·s, roughly 50 times smaller than the viscosity of water at the same temperature. Except at very high pressure, the viscosity of air depends mostly on the temperature. Among the many possible approximate formulas for the temperature dependence (see Temperature dependence of viscosity ), one is: [ 89 ]

η_air = 2.791 × 10⁻⁷ × T^0.7355

which is accurate in the range −20 °C to 400 °C. For this formula to be valid, the temperature must be given in kelvins; η_air then corresponds to the viscosity in Pa·s.
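A quick numerical check of this power-law approximation (a sketch; the constants are those given above):

```python
def air_viscosity_Pas(T_kelvin: float) -> float:
    """Approximate dynamic viscosity of air (Pa*s) from the power-law fit above."""
    return 2.791e-7 * T_kelvin ** 0.7355

print(air_viscosity_Pas(298.15) * 1e6)  # about 18.5 uPa*s at 25 degC
```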
The following table illustrates the range of viscosity values observed in common substances. Unless otherwise noted, a temperature of 25 °C and a pressure of 1 atmosphere are assumed.
The values listed are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior.
|
https://en.wikipedia.org/wiki/Viscosity
|
The viscosity index ( VI ) is an arbitrary, unit-less measure of a fluid's change in viscosity relative to temperature change. It is mostly used to characterize the viscosity-temperature behavior of lubricating oils . The lower the VI, the more the viscosity is affected by changes in temperature. The higher the VI, the more stable the viscosity remains over some temperature range. The VI was originally measured on a scale from 0 to 100; however, advancements in lubrication science have led to the development of oils with much higher VIs. [ 1 ]
The viscosity of a lubricant is closely related to its ability to reduce friction in solid body contacts. Generally, the least viscous lubricant which still forces the two moving surfaces apart to achieve " fluid bearing " conditions is desired. If the lubricant is too viscous, it will require a large amount of energy to move (as in honey ); if it is too thin, the surfaces will come in contact and friction will increase. [ 2 ]
Many lubricant applications require the lubricant to perform across a wide range of conditions. Automotive lubricants, for example, must reduce friction between engine components from a cold start (relative to the engine's operating temperatures ) up to 200 °C (392 °F) when the engine is running. The best oils, with the highest VI, remain stable and vary little in viscosity over this temperature range, which provides consistent engine performance within normal working conditions. Historically, two different oil types were recommended for different weather conditions. In winter, with cold starts and temperatures ranging from roughly −30 °C to 0 °C, a 5 weight oil would remain pumpable at the very low temperatures and at the generally cooler engine operating temperatures. In hot climates, where temperatures range from 30 °C to 45 °C, a 50 weight oil would be necessary so that it remained thick enough to hold up an oil film between the moving hot parts.
Multigrade oils were developed to address this limitation. With variable temperatures of, say, −10 °C during cold nights and 20 °C during the day, a 5 weight oil would be pumpable in a cold engine, while the characteristics of a 30 weight oil would be ideal once the engine reached running temperature and the day warmed up. Thus 5W-30 oils were introduced, replacing the fixed, temperature-limited grades in which thin oils became too thin when hot and thicker oils became too thick when cold.
The effect of temperature on a single-viscosity oil can be demonstrated by pouring a small amount of vegetable oil into a pot or pan and then either cooling it in a freezer or heating it on a cooking stove. When cooled enough in a deep freezer, the oil solidifies into a wax-like block that cannot be pumped around an engine's lubrication system. When a spoonful of very cold oil is instead put into a pan on a stove and slowly heated and swirled, it gradually warms up, and there is a definite temperature range in which the oil is warm and traditionally "oily". Heated further, however, the oil becomes thinner and thinner until, when it is nearly smoking, it is almost as thin as water and has almost no capacity to keep moving parts separated, resulting in metal-to-metal contact and damage to the components that a thin film of oil is supposed to keep apart.
Thus multigrade oils are recommended for use based on the ambient temperature range of the season or environment.
There are also issues of oil temperature maintenance: oil or engine heaters enable easy starting and a shorter warm-up period in very cold climates, while oil coolers remove enough heat from the oil, and thus from the engine, gearbox, or hydraulic oil circuit, to keep the oil's temperature within a specified upper working limit.
The VI scale was set up by the Society of Automotive Engineers (SAE). The temperatures chosen arbitrarily for reference are 100 and 210 °F (38 and 99 °C). The scale was originally interpolated between 0 for a naphthenic Texas Gulf crude and 100 for a paraffinic Pennsylvania crude. Since the inception of the scale, better oils have also been produced, leading to VIs greater than 100 (see below). [ 3 ]
VI-improving additives and higher-quality base oils are now widely used, raising the attainable VIs beyond 100. The viscosity index of synthetic oils ranges from 80 to over 400. [ citation needed ]
The viscosity index can be calculated using the following formula: [ 4 ]

VI = 100 × (L − U) / (L − H)
where U is the oil's kinematic viscosity at 40 °C (104 °F), Y is the oil's kinematic viscosity at 100 °C (212 °F), and L and H are the viscosities at 40 °C for two hypothetical oils of VI 0 and 100 respectively, having the same viscosity at 100 °C as the oil whose VI we are trying to determine. That is, the two oils with viscosity Y at 100 °C and a VI of 0 and 100 would have at 40 °C the viscosities of L and H respectively. These L and H values can be found in tables in ASTM D2270 [ 4 ] and are incorporated in online calculators. [ 5 ]
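A minimal Python sketch of this calculation, using the basic formula above (which applies for VI up to 100; ASTM D2270 prescribes a separate procedure for higher values). The numeric inputs below are placeholders, not values from the ASTM tables:

```python
def viscosity_index(U: float, L: float, H: float) -> float:
    """Viscosity index from kinematic viscosities at 40 degC (valid for VI <= 100).

    U: the oil's kinematic viscosity at 40 degC
    L: 40 degC viscosity of the reference oil with VI = 0 (same viscosity at 100 degC)
    H: 40 degC viscosity of the reference oil with VI = 100 (same viscosity at 100 degC)
    """
    return (L - U) / (L - H) * 100.0

# Hypothetical placeholder values, not taken from the ASTM D2270 tables:
print(viscosity_index(U=90.0, L=130.0, H=80.0))  # 80.0
print(viscosity_index(U=80.0, L=130.0, H=80.0))  # 100.0 when U equals H
```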
|
https://en.wikipedia.org/wiki/Viscosity_index
|
In geometry , visibility is a mathematical abstraction of the real-life notion of visibility.
Given a set of obstacles in the Euclidean space , two points in the space are said to be visible to each other, if the line segment that joins them does not intersect any obstacles. (In the Earth's atmosphere light follows a slightly curved path that is not perfectly predictable, complicating the calculation of actual visibility.)
Computation of visibility is among the basic problems in computational geometry and has applications in computer graphics , motion planning , and other areas.
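With polygonal obstacles, for instance, the visibility test reduces to checking that the segment joining the two points crosses no obstacle edge. A minimal 2D sketch in Python (helper names and the sample obstacle are illustrative, and collinear edge cases are ignored for brevity):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def _orient(a: Point, b: Point, c: Point) -> float:
    # Signed area of triangle abc: > 0 counter-clockwise, < 0 clockwise, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1: Point, p2: Point, q1: Point, q2: Point) -> bool:
    # Proper-crossing test; collinear overlaps are ignored for brevity.
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p: Point, q: Point, obstacle_edges: List[Tuple[Point, Point]]) -> bool:
    """Two points are visible if the segment joining them crosses no obstacle edge."""
    return not any(segments_intersect(p, q, a, b) for a, b in obstacle_edges)

# A single square obstacle blocking the line of sight between (0, 0) and (4, 0):
square = [((1.0, -1.0), (1.0, 1.0)), ((1.0, 1.0), (2.0, 1.0)),
          ((2.0, 1.0), (2.0, -1.0)), ((2.0, -1.0), (1.0, -1.0))]
print(visible((0.0, 0.0), (4.0, 0.0), square))  # False: blocked by the square
print(visible((0.0, 2.0), (4.0, 2.0), square))  # True: passes above the square
```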
|
https://en.wikipedia.org/wiki/Visibility_(geometry)
|
The Visible Embryo Project ( VEP ) is a multi-institutional, multidisciplinary research project originally created in the early 1990s as a collaboration between the Developmental Anatomy Center at the National Museum of Health and Medicine and the Biomedical Visualization Laboratory (BVL) at the University of Illinois at Chicago , "to develop software strategies for the development of distributed biostructural databases using cutting-edge technologies for high-performance computing and communications (HPCC), and to implement these tools in the creation of a large-scale digital archive of multidimensional data on normal and abnormal human development." [ 1 ] This project related to BVL's other research in the areas of health informatics, educational multimedia, and biomedical imaging science. [ 2 ] [ 3 ] [ 4 ] Over the following decades, the list of VEP collaborators grew to include over a dozen universities, national laboratories, and companies around the world.
An early (1993) goal of the project was to enable what it called "Spatial Genomics," to create tools and systems for three-dimensional morphological mapping of gene expression , to correlate data from the Human Genome Project with the multidimensional location of genomic expression activity within the morphological context of organisms. This led to the invention in the late 1990s by VEP collaborators of the first system for Spatial transcriptomics . [ 5 ] [ 6 ] Other areas that VEP researchers pioneered include early web technologies, cloud computing, blockchain, and virtual assistant technology.
The VEP was created in 1992 as a collaboration between the UIC Biomedical Visualization Laboratory, directed by Michael Doyle, and the Human Developmental Anatomy Center at the National Museum of Health and Medicine (NMHM), directed by Adrianne Noe. Doyle had been appointed to the oversight committee of the Visible Human Project at the National Library of Medicine , but it would be several years before that data would become available. Looking for other sources of high-resolution volume data on the human anatomical structure, he came across the Carnegie Collection of Human Embryology, [ 7 ] housed at the NMHM. During a sabbatical working on methods for magnetic resonance microscopy (MRM) in the laboratory of Paul Lauterbur , Doyle created a plan for the VEP and worked with Noe to recruit a large group of prominent researchers to join as initial collaborators. [ 8 ] [ 9 ]
A primary goal of the project was to provide a testbed for the development of new technologies, and the refinement of existing ones, for the application of high-speed, high-performance computing and communications to current problems in biomedical science. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Much of the early work involved creating serial section reconstructions from microscope slides and extracting volumetric data from the NMHM specimens, rather than just surface data. Sets of serial microscopic cross-sections through human embryos (prepared by Carnegie Collection contributors between the 1890s and 1970s) were used as sample image data around which to design and implement various components of the system. These images were digitized and processed to create 3D voxel datasets representing embryonic anatomy. Standard techniques for 3D volume visualization could then be applied to these data. [ 1 ] [ 3 ] Image processing of these data was required to correct for certain artifacts that were found in the original microscope sections from routine histological techniques of the tissue preparation.
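The core reconstruction step, stacking registered and equally spaced section images into a voxel volume that standard 3D visualization tools can consume, can be sketched as follows (a simplified illustration with stand-in data rather than the project's actual pipeline; the voxel spacing values are hypothetical):

```python
import numpy as np

# Stand-in data: a list of registered, equally spaced section images, each
# already loaded as a 2D grayscale array of identical shape.
sections = [np.random.rand(512, 512) for _ in range(100)]

# Stack the sections along a new axis to form a 3D voxel dataset
# (z = section index, y/x = in-plane pixel coordinates).
volume = np.stack(sections, axis=0)

# Voxel spacing is usually anisotropic: section thickness differs from pixel size.
spacing_um = (40.0, 5.0, 5.0)  # (z, y, x) in micrometres (hypothetical values)

print(volume.shape)  # (100, 512, 512), ready for volume rendering or resampling
```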
Later activities of the project would make use of MRM datasets acquired from the NMHM collection, ultra-high resolution histology images, [ 16 ] and three-dimensional adult image data acquired via the Visible Human Project , in addition to embryo data.
The VEP became a far-reaching collaborative research program involving a large number of eminent scientists across the nation and around the world, including, among many others, Michael Doyle, of UIC, then UCSF, and project founder, Adrianne Noe, Director of the National Museum of Health and Medicine , George Washington University 's Robert Ledley , inventor of the Full-body CT scanner , UIUC 's Paul Lauterbur , MRI pioneer and Nobel laureate , LSU 's Ray Gasser, eminent embryologist, Oregon Health & Science University 's Kent Thornburg, internationally renowned developmental biologist, Regan Moore, Director of the DICE group at the San Diego Supercomputer Center , William Lennon of Lawrence Livermore National Laboratory , Ingrid Carlbom of Digital Equipment Corporation 's Cambridge Research Lab, and Demetri Terzopoulos of the University of Toronto .
Some notable Visible Embryo Project collaborations include:
In the mid-1990s, Michael Doyle collaborated with Harvard 's Betsey Williams to create an internet atlas of mouse development, in a project named "Muritech." A prototype two- and three-dimensional color atlas of mouse development was developed, using two embryos, a 13.5 d normal mouse embryo and a PATCH mutant embryo of the same age. Serial sections of the embryos, with an external registration marker system, introduced into the paraffin embedding process, were prepared by standard histological methods. For the 2D atlas, color images were digitized from 100 consecutive sections of the normal embryo. For the 3D atlas, 300 gray-scale images digitized from the mutant embryo were conformally warped and reconstructed into a 3D volume dataset. The external fiducial system facilitated the three-dimensional reconstruction by providing accurate registration of consecutive images and also allowed for precise spatial calibration and the correction of warping artifacts. The atlases, with their associated anatomical knowledge base, were then integrated into a multimedia online information resource via the VEP's Web technology to provide research biologists with a set of advanced tools to analyze normal and abnormal murine embryonic development. [ 17 ]
The Human Embryology Digital Library and Collaboratory Support Tools project was begun in 1999 as a demonstration of the biomedical application potential of the Next Generation Internet (NGI). The collaborators included eight organizations at sites around the continental USA, a mix of medical and information technology organizations, including George Mason University , Eolas , the Armed Forces Institute of Pathology , Johns Hopkins University , Lawrence Livermore National Laboratory , the Oregon Health & Science University , the San Diego Supercomputer Center , and the University of Illinois at Chicago . The project undertook three major applications, based on the Carnegie Collection of Embryos at the AFIP's National Museum of Health and Medicine Human Development Anatomy Center (HDAC), a collection of cellular-level tissue slides that is one of the world's largest repositories of human embryos.
These applications included:
1. Digitization, curation, and annotation of embryo data: The VEP team created a production digitization capability, using automated digital microscopy, with data automatically registered for tiling and transmitted to the repository at the San Diego Supercomputer Center , and annotated by teams of biomedical volunteers with expert-level quality control.
2. Distributed embryology education using materials derived from the Carnegie Collection to create animations of embryo development and recorded master classes that can be streamed over the Internet or downloaded to create a portable electronic classroom.
3. Clinical management planning where medical professionals and expectant parent patients can review normal and abnormal development patterns with collaborative consultation from distant experts. [ 9 ]
To enable new ways to interactively explore the VEP's massive volume datasets, Michael Doyle created the zMap system, using the Visible Human Project image data for the first prototype. In 2011, Doyle collaborated with Steven Landers, Maurice Pescitelli, and others to use zMap to create an interactive tool that allows the user to select desired sets of anatomical structures for the automated generation of 3D Quicktime VR visualizations. The system used the resources available in the Eolas AnatLab knowledgebase, which has over 2200 structures identified involving a total of over 4600 sections and 700,000 annotations overall, to access the anatomical structure surface information for individual structures. This surface information was then used to automatically extract the contained volumetric image data and convert the data into a format compatible with the Osirix volume imaging system. Automated scripts then controlled Osirix in the creation of a 3D visualization of the group of selected anatomical structures. Photorealistic results were obtained by using the original color voxel information from the original Visible Human cryosection images to color the surface of the 3D reconstruction. The system then automatically progressed through a pre-defined set of rotations to generate the set of image frames required to create a Quicktime VR (QTVR) interactive movie. This system thereby allowed an anatomy instructor to quickly and easily generate customized interactive 3D reconstructions for use in the classroom. [ 18 ]
Over the decades since it was begun, the work done in the Visible Embryo Project has led to the development of several important technological breakthroughs that have had a worldwide impact:
Even though spatial mapping of Omics data had been described as an initial goal of the VEP, it wasn't until 1999 that four VEP collaborators, Michael Doyle, George Michaels, Maurice Pescitelli, and Betsey Williams worked together to create a system for what they called "spatial genomics." [ 5 ] Today, this technology is known as Spatial transcriptomics . As their 2001 U.S. patent application states, [ 5 ] their system solved the need "to gather gene expression data in a manner that supports the type of exploratory research that can take advantage of the broad-spectrum types of biologic activity analysis enabled by today's microarray tools," as well as the need for "technology to allow the collection of large volumes of these types of data, to enable exploratory investigations into patterns of biologic activity ... to correlate gene expression data with morphological structure in a useful and easy to understand manner, such as in a volume visualization environment ... to allow the collection of larger volumes of gene expression data across a wider spectrum of gene types than ever before."
They named their system SAGA, short for Spatial Analysis of Genomic Activity. As described in the related U.S. patents, [ 5 ] [ 19 ] [ 20 ] the SAGA system enabled the multidimensional morphological reconstruction of tissue biologic activity and "makes it possible for biological tissue specimens to be imaged in multiple dimensions to allow morphological reconstruction. The same tissue specimen is physically sampled in a regular raster array, so that tissue samples are taken in a regular multidimensional matrix pattern across each of the dimensions of the tissue specimen. Each sample is isolated and coded so it can be later correlated to the specific multidimensional raster array coordinates, thereby providing a correlation with the sample's original pre-sampling morphological location in the tissue specimen. Each tissue sample is then analyzed with broad-spectrum biological activity methods, providing information on a multitude of biologic functional characteristics [mRNA, etc.] for that sample. The resultant raster-based biological characteristic data may then be spatially mapped into the original multidimensional morphological matrix of image data. ... various types of analysis may then be performed on the resultant correlated multidimensional spatial datasets."
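The core idea, mapping each raster-coded sample's assay result back to its original grid coordinates, can be illustrated with a small Python sketch (the data values and field names are hypothetical; this is not the patented SAGA implementation):

```python
import numpy as np

# Hypothetical raster-sampled assay results: each record carries the sample's
# (z, y, x) raster coordinates plus a measured expression level for one gene.
samples = [
    {"z": 0, "y": 1, "x": 2, "expression": 3.7},
    {"z": 0, "y": 1, "x": 3, "expression": 1.2},
    {"z": 1, "y": 0, "x": 2, "expression": 0.4},
]

grid_shape = (2, 4, 4)                    # dimensions of the sampling raster
expression_volume = np.zeros(grid_shape)  # spatial map of the assay signal

# Write each sample's value back at its original raster coordinates, restoring
# the morphological context that bulk assays normally discard.
for s in samples:
    expression_volume[s["z"], s["y"], s["x"]] = s["expression"]

# The resulting volume can then be overlaid on the registered image data.
print(expression_volume[0])
```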
Spatial transcriptomics was named the "Method of the Year for 2020" by Nature , in January 2021. [ 21 ]
In 1993, Doyle became the Director of the UCSF Center for Knowledge Management (CKM). To create the underlying software and hardware that would provide the needed computational power for the VEP, Doyle's CKM group designed a new paradigm for performing remote client-server volume visualization over the Internet. [ 10 ] This involved creating a system for remotely computing visualizations through a networked cluster, or cloud, of distributed heterogeneous computational engines, and coordinating the computations to pass user interface control messages to those engines, causing the cloud computers to generate new rendered visualizations and stream the resulting views to the users' desktops, while delta-encoding and compressing the streamed data to optimize performance over low-bandwidth connections. [ 22 ]
To hide the complexity of the system from the user, they modified one of the earliest versions of the NCSA Mosaic Web browser [ 23 ] to allow their interactive cloud-computing applications to be automatically launched and run embedded within Web pages, so any user would need only to load a Web document from the VEP and would be able to immediately interactively explore the project's multidimensional datasets, rather than static representations of those datasets. [ 24 ]
In November 1993, the CKM's VEP research group demonstrated this system, the first Web-based Cloud application platform , on-stage to a meeting of approximately 300 Bay Area SIGWEB members at Xerox PARC. [ 25 ]
Today, this capability is called " the Cloud ". The VEP team's work opened the door to the potential of the Web to provide rich information resources to users, regardless of where they were located, and spawned a multi-trillion-dollar industry as a result. [ 26 ]
Doyle then began to focus more directly on the problem of how to navigate within these complex biomedical volume datasets and developed a system for mapping the semantic identity of morphological structures within the datasets and integrating those mappings with the hypermedia linking mechanism of the Web. This led to the creation of the first three-dimensional Web image map system and was used to create a variety of online interactive reference systems for biomedical education and research throughout the 90s and beyond. [ 27 ] [ 28 ]
One of the challenges for large collaborative knowledge bases is how to assure the integrity of data over a long period of time. Standard cryptographic methods that depend upon trusting a central validation authority are vulnerable to a variety of factors that can lead to data corruption, including hacking, data breaches, insider fraud, and the possible disappearance of the validating authority itself. To solve this problem for the VEP data collections, Doyle created a novel type of cryptographic system, called Transient-key cryptography . This system allows the validation of data integrity without requiring users of the system to trust any central authority, and also represented the first decentralized blockchain system, enabling the later creation of the Bitcoin system. In the mid-2000s, this technology was adopted as a national standard in the ANSI ASC X9.95 Standard for trusted timestamps . [ 29 ] [ 30 ]
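The general idea of making stored records tamper-evident by cryptographically chaining them can be illustrated with a simplified hash-chain sketch in Python (this is only an illustration of chained integrity, not the transient-key protocol standardized in ANSI ASC X9.95):

```python
import hashlib
import json

def chain(records):
    """Link each record to the hash of everything before it, so any later
    alteration of an earlier record invalidates all subsequent links."""
    prev = b""
    chained = []
    for rec in records:
        payload = json.dumps(rec, sort_keys=True).encode()
        digest = hashlib.sha256(prev + payload).hexdigest()
        chained.append({"record": rec, "hash": digest})
        prev = digest.encode()
    return chained

# Hypothetical annotation log entries:
log = chain([{"specimen": "embryo-041", "action": "annotated"},
             {"specimen": "embryo-041", "action": "reviewed"}])
for entry in log:
    print(entry["hash"][:16], entry["record"])
```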
Since the mid-2000s, the VEP team has made great use of digital voice and text communications systems, to facilitate communications among geographically-distributed team members. To increase the efficiency of these communications, Michael Doyle and Steve Landers collaborated to create the Skybot system, the first AI-based mobile virtual assistant system. Skybot used the power and flexibility of AI to dramatically expand the use of messaging systems. Using Skybot, one could create a variety of programmable responses to incoming calls and chat messages. The system incorporated a state machine that could be configured to automatically trigger automated responses to various communication and user-context events. This provided the user with a surprisingly broad and powerful set of capabilities for automating mobile communication operations and pioneered the mobile intelligent-assistant product category that is now ubiquitous worldwide. [ 31 ] [ 32 ]
Plans are underway to secure the funding necessary to expand the Visible Embryo Project to create a national resource that combines a large-scale knowledgebase with advanced analytical tools in an innovative online collaborative environment to support and continue to advance the art and science of Spatial Omics. [ 33 ]
|
https://en.wikipedia.org/wiki/Visible_Embryo_Project
|
The Visible Multi-Object Spectrograph ( VIMOS ) is a wide-field imager and multi-object spectrograph installed at the European Southern Observatory 's Very Large Telescope (VLT) in Chile. Used for deep astronomical surveys, the instrument delivers visible images and spectra of up to 1,000 galaxies at a time. [ 1 ] [ 2 ] VIMOS images four rectangular areas of the sky, 7 by 8 arcminutes each, with gaps of 2 arcminutes between them. [ 1 ] Its principal investigator was Olivier Le Fèvre .
The Franco-Italian instrument operates in the visible part of the spectrum from 360 to 1000 nanometers (nm). In the conceptual design phase, the multi-object spectrograph then called VIRMOS included an additional instrument, NIMOS, operating in the near-infrared spectrum of 1100–1800 nm. [ 3 ]
Operating in the three different observation modes, direct imaging, multi-slit spectroscopy, and integral field spectroscopy, the main objective of the instrument is to study the early universe through massive redshift surveys , such as the VIMOS-VLT Deep Survey . [ 4 ]
VIMOS saw its first light on 26 February 2002, and has since been mounted on the Nasmyth B focus of VLT's Melipal unit telescope (UT3). [ 5 ] [ 6 ]
It was retired in 2018 to make space for the return of CRIRES+. [ 7 ]
|
https://en.wikipedia.org/wiki/Visible_Multi_Object_Spectrograph
|
Vision rehabilitation (often called vision rehab ) is a term for medical rehabilitation to improve vision in people with visual impairment or low vision . In other words, it is the process of restoring functional ability and improving quality of life and independence in an individual who has lost visual function through illness or injury. [ 1 ] [ 2 ] Most visual rehabilitation services are focused on low vision, which is a visual impairment that cannot be fully corrected by regular eyeglasses, contact lenses, medication, or surgery. Low vision interferes with the ability to perform everyday activities. [ 3 ] Visual impairment is caused by factors including brain damage , vision loss , and others. [ 4 ] Of the vision rehabilitation techniques available, most center on neurological and physical approaches. According to the American Academy of Ophthalmology, "Provision of, or referral to, vision rehabilitation is now the standard of care for all who experience vision loss." [ 5 ]
Rehabilitation (literally, the act of making able again) helps patients achieve physical, social, emotional, spiritual independence and quality of life. [ 6 ] Rehabilitation does not undo or reverse the cause of damage; it seeks to promote function and independence through adaptation. Individuals can seek rehabilitation in different domains, such as motor rehabilitation after a stroke or physical rehabilitation after a car accident. [ 7 ] Low vision can be caused by many diseases. [ 6 ]
There are many treatments and therapies that aim to slow the progression of vision loss or improve vision using neurological approaches. Studies have found that, in some cases, low vision can be restored to good vision. [ 4 ] [ 8 ] In other cases, vision cannot be restored to normal levels, but progressive visual loss can be stopped through interventions. [ 6 ]
In general, chemical treatments are designed to slow the process of vision loss. Some research focuses on neuroprotective treatments intended to slow the progression of vision loss. [ 9 ] Although other approaches exist, neuroprotective treatments appear to be the most common chemical treatments.
Gene therapy uses DNA as a delivery system to treat visual impairments. In this approach, DNA is delivered via a viral vector, after which cells related to vision cease translating faulty proteins. [ 10 ] Gene therapy appears to be the most prominent field that might be able to restore vision through therapy. However, research indicates that gene therapy may worsen symptoms, cause them to last longer, or lead to further complications.
For physical approaches to vision rehabilitation, most of the training is focused on ways to make environments easier to deal with for those with low vision. Occupational therapy is commonly suggested for these patients. [ 11 ] Also, there are devices that help patients achieve higher standards of living. These include video magnifiers, peripheral prism glasses, transcranial direct current stimulation (tDCS), closed-circuit television (CCTV), RFID devices, electronic badges with emergency alert systems, virtual sound systems, and smart wheelchairs.
Mobility training improves the ability of patients with visual impairment to live independently by training them to become more mobile. [ 12 ] For low vision patients, multiple mobility training methods and devices are available, including the 3D sound virtual reality system, talking braille , and RFID floors.
The 3D sound virtual reality system transforms sounds into locations and maps the environment. [ 13 ] This system alerts patients so they can avoid possible dangers. Talking braille is a device that helps low vision patients read braille by detecting light and transmitting this information through Bluetooth technology. [ 14 ] RFID floors are GPS-like indoor navigation systems that help patients map building interiors, ultimately allowing them to detour around obstacles. [ 15 ]
Home skills training allows patients to improve communication skills, self-care skills, cognitive skills, socialization skills, vocational training, psychological testing, and education. [ 16 ] One study identifies multicomponent group interventions for older adults with low vision as an effective approach to home training. [ 17 ] The multicomponent group interventions include learning new knowledge or skills each week, having multiple sessions to allow participants to apply learned knowledge or skills in their living environment, and building relationships with their health care providers. [ 18 ] The most important factor in this intervention is support from family, which includes assistance with changes in lifestyle, financial concerns, and future planning. [ 19 ]
The field of vision rehabilitation therapy is made up of professionals who provide specialized services to individuals who are blind or who have a vision loss that cannot be corrected with prescription lenses, medication, or surgery. Professionals who work in this field are called Vision Rehabilitation Therapists [ 20 ] (VRTs) or Rehabilitation Teachers [ 20 ] (RTs). A vision rehabilitation therapist (VRT) is a professional who provides specialized instruction and guidance to individuals who are blind or have low vision. Best practice recommends that professionals who work in this field be nationally certified. [ 21 ] To obtain certification as a VRT, professionals must complete a course of study through a university program , complete a 350-hour internship, and pass a certification examination. [ 22 ] The certifying body for VRTs is the Academy for Certification of Vision Rehabilitation and Education Professionals (ACVREP). [ 20 ] The ACVREP certification for a VRT is called Certified Vision Rehabilitation Therapist, and the certified professional uses the letters CVRT to indicate this credential. A VRT works within the scope of practice outlined by ACVREP. [ 22 ] The VRT provides instruction in the use of adaptive skills and strategies to help individuals with vision loss safely meet their personal goals for employment, education, and independence in the workplace, home, and community. Training from a VRT may include:
The VRT serves individuals of any age, whether vision loss is present at birth or acquired later in life. Individuals with any level of visual impairment, whether partial or total, may benefit from services provided by the VRT. Services provided by a VRT are comprehensive, taking into consideration visual abilities, other physical limitations, social supports, and emotional adjustment to vision loss. Instruction with a VRT often uses strategies that involve other senses to complete tasks, the use of devices that enhance low vision or increase accessibility, and problem-based learning.
Vision Rehabilitation Therapists are hired by state vocational rehabilitation programs, non-profit agencies, veterans’ administration (VA) hospitals, [ 23 ] or they may choose to be self-employed, working as private contractors. A VRT may provide their services one-on-one or in a group setting. Many services are provided in the home of the client with vision loss, so that environmental factors can be assessed, and specific strategies practiced in the location where tasks need to be completed. Services might also be provided in the client’s workplace or educational institution, a community center, rehab residential facility, or in the community. The vision rehabilitation therapist may also work as part of a rehabilitation team, which may include an Orientation and Mobility (O&M) Specialist (COMS), Certified Assistive Technology Instructional Specialist (CATIS) , and Low Vision Therapist (CLVT) to provide comprehensive rehabilitation services.
Occupational therapists can assess how low vision affects day-to-day function. [ 24 ] They can promote independence in daily activities through home assessments and modifications, problem solving training, home exercise programs and finding compensatory strategies. [ 25 ] [ 24 ] For example, an occupational therapist can suggest adding lighting and contrast to a room to improve visibility. [ 24 ]
|
https://en.wikipedia.org/wiki/Vision_rehabilitation
|
A Visite du Branchage is an inspection of roads in Jersey and Guernsey to ensure property owners have complied with the laws against vegetation encroaching onto the road.
The Visite du Branchage takes place in each parish twice a year to check that occupiers of houses and land bordering on public roads have undertaken the 'branchage'.
The Loi (1914) sur la Voirie imposes a duty on all occupiers of property to ensure that encroachments are removed from the public highway.
The first Visite is between 24 June and 15 July, and the second is between 1 and 21 September. [ 1 ]
On the Visite du Branchage the connétable , assisted by the members of the Roads Committee , Roads Inspectors and the centeniers , will visit the roads of his parish accompanied by the vingteniers in their respective Vingtaines to ensure that the branchage has been completed. Occupiers of land may be fined up to £50 for each infraction unless -
If the branchage has not been completed the occupier will be required to undertake the work and, if it is not carried out, the parish may arrange for the work to be done and charge the occupier the cost of that work.
The Visite du Branchage applies to all public roads including main roads, by-roads and footpaths.
The Branchage Film Festival takes its name from the Visite du Branchage.
|
https://en.wikipedia.org/wiki/Visite_du_Branchage
|